Test Report: Docker_Linux_containerd 15232

                    
0194cc3582ecd25a736ac3660bc9effa677f982b:2022-11-01:26370

Failed tests (5/277)

Order  Failed test                                      Duration (s)
205    TestPreload                                      356.66
213    TestKubernetesUpgrade                            583.5
314    TestNetworkPlugins/group/calico/Start            528.75
331    TestNetworkPlugins/group/bridge/DNS              364.93
334    TestNetworkPlugins/group/enable-default-cni/DNS  302.76
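Each of the failures listed above can usually be reproduced in isolation: minikube's integration tests are standard Go tests, so a single test can be selected by name with `go test -run`. The sketch below only builds the command line; the `./test/integration` package path and the `-timeout` value are assumptions for illustration, not taken from this report.

```shell
#!/bin/sh
# Sketch: construct a `go test` invocation that re-runs one failed
# integration test by name. Path and timeout are assumptions.
rerun_cmd() {
  # $1: test name, e.g. TestPreload
  printf 'go test -v ./test/integration -run %s -timeout 60m\n' "$1"
}

rerun_cmd TestPreload
```

For subtests such as `TestNetworkPlugins/group/calico/Start`, the slash-separated name is itself a valid `-run` argument, since `go test` matches each path segment as a regular expression.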
TestPreload (356.66s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-230809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-230809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (50.604250072s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.711754438s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-230809 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6
E1101 23:09:22.269750   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 23:09:42.407261   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 23:12:32.185694   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 23:12:59.224603   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 23:13:55.363507   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
preload_test.go:67: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-230809 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6: exit status 81 (5m0.539248776s)

-- stdout --
	* [test-preload-230809] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node test-preload-230809 in cluster test-preload-230809
	* Pulling base image ...
	* Downloading Kubernetes v1.24.6 preload ...
	* Updating the running docker "test-preload-230809" container ...
	* Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
	* Configuring CNI (Container Networking Interface) ...
	X Problems detected in kubelet:
	  Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.121441    4572 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	  Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.121486    4572 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	  Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.134778    4572 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	
	

-- /stdout --
** stderr ** 
	I1101 23:09:02.101256  127145 out.go:296] Setting OutFile to fd 1 ...
	I1101 23:09:02.101369  127145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:09:02.101380  127145 out.go:309] Setting ErrFile to fd 2...
	I1101 23:09:02.101385  127145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:09:02.101473  127145 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	I1101 23:09:02.101987  127145 out.go:303] Setting JSON to false
	I1101 23:09:02.102936  127145 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3088,"bootTime":1667341054,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 23:09:02.102996  127145 start.go:126] virtualization: kvm guest
	I1101 23:09:02.105803  127145 out.go:177] * [test-preload-230809] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1101 23:09:02.107347  127145 notify.go:220] Checking for updates...
	I1101 23:09:02.108879  127145 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 23:09:02.110538  127145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 23:09:02.112123  127145 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 23:09:02.113662  127145 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	I1101 23:09:02.115184  127145 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 23:09:02.116881  127145 config.go:180] Loaded profile config "test-preload-230809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I1101 23:09:02.118764  127145 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1101 23:09:02.120144  127145 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 23:09:02.148923  127145 docker.go:137] docker version: linux-20.10.21
	I1101 23:09:02.149004  127145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:09:02.241848  127145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-01 23:09:02.16794253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:09:02.241979  127145 docker.go:254] overlay module found
	I1101 23:09:02.245118  127145 out.go:177] * Using the docker driver based on existing profile
	I1101 23:09:02.246572  127145 start.go:282] selected driver: docker
	I1101 23:09:02.246590  127145 start.go:808] validating driver "docker" against &{Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-230809 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:09:02.246667  127145 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 23:09:02.247466  127145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:09:02.338554  127145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-01 23:09:02.266470239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:09:02.338791  127145 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 23:09:02.338813  127145 cni.go:95] Creating CNI manager for ""
	I1101 23:09:02.338820  127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:09:02.338831  127145 start_flags.go:317] config:
	{Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:09:02.341335  127145 out.go:177] * Starting control plane node test-preload-230809 in cluster test-preload-230809
	I1101 23:09:02.342819  127145 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1101 23:09:02.344289  127145 out.go:177] * Pulling base image ...
	I1101 23:09:02.345773  127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1101 23:09:02.345854  127145 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 23:09:02.367470  127145 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 23:09:02.367494  127145 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 23:09:02.456956  127145 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1101 23:09:02.456979  127145 cache.go:57] Caching tarball of preloaded images
	I1101 23:09:02.457299  127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1101 23:09:02.459387  127145 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I1101 23:09:02.460985  127145 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1101 23:09:02.574127  127145 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1101 23:09:07.458996  127145 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1101 23:09:07.459100  127145 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1101 23:09:08.389256  127145 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I1101 23:09:08.389384  127145 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/config.json ...
	I1101 23:09:08.389578  127145 cache.go:208] Successfully downloaded all kic artifacts
	I1101 23:09:08.389617  127145 start.go:364] acquiring machines lock for test-preload-230809: {Name:mke051021b2965b04875f4fe9250ee1fc48098e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 23:09:08.389726  127145 start.go:368] acquired machines lock for "test-preload-230809" in 76.094µs
	I1101 23:09:08.389751  127145 start.go:96] Skipping create...Using existing machine configuration
	I1101 23:09:08.389762  127145 fix.go:55] fixHost starting: 
	I1101 23:09:08.390003  127145 cli_runner.go:164] Run: docker container inspect test-preload-230809 --format={{.State.Status}}
	I1101 23:09:08.411982  127145 fix.go:103] recreateIfNeeded on test-preload-230809: state=Running err=<nil>
	W1101 23:09:08.412027  127145 fix.go:129] unexpected machine state, will restart: <nil>
	I1101 23:09:08.414797  127145 out.go:177] * Updating the running docker "test-preload-230809" container ...
	I1101 23:09:08.416264  127145 machine.go:88] provisioning docker machine ...
	I1101 23:09:08.416295  127145 ubuntu.go:169] provisioning hostname "test-preload-230809"
	I1101 23:09:08.416338  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:08.439734  127145 main.go:134] libmachine: Using SSH client type: native
	I1101 23:09:08.440024  127145 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1101 23:09:08.440069  127145 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-230809 && echo "test-preload-230809" | sudo tee /etc/hostname
	I1101 23:09:08.562938  127145 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-230809
	
	I1101 23:09:08.563010  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:08.585385  127145 main.go:134] libmachine: Using SSH client type: native
	I1101 23:09:08.585561  127145 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1101 23:09:08.585590  127145 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-230809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-230809/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-230809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 23:09:08.698901  127145 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 23:09:08.698934  127145 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-6112/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-6112/.minikube}
	I1101 23:09:08.698966  127145 ubuntu.go:177] setting up certificates
	I1101 23:09:08.698978  127145 provision.go:83] configureAuth start
	I1101 23:09:08.699037  127145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-230809
	I1101 23:09:08.721518  127145 provision.go:138] copyHostCerts
	I1101 23:09:08.721585  127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem, removing ...
	I1101 23:09:08.721599  127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem
	I1101 23:09:08.721689  127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem (1078 bytes)
	I1101 23:09:08.721805  127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem, removing ...
	I1101 23:09:08.721820  127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem
	I1101 23:09:08.721860  127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem (1123 bytes)
	I1101 23:09:08.721933  127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem, removing ...
	I1101 23:09:08.721947  127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem
	I1101 23:09:08.721984  127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem (1675 bytes)
	I1101 23:09:08.722065  127145 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem org=jenkins.test-preload-230809 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-230809]
	I1101 23:09:09.342668  127145 provision.go:172] copyRemoteCerts
	I1101 23:09:09.342737  127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 23:09:09.342788  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:09.365869  127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
	I1101 23:09:09.450803  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 23:09:09.467332  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 23:09:09.484069  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 23:09:09.500288  127145 provision.go:86] duration metric: configureAuth took 801.291693ms
	I1101 23:09:09.500314  127145 ubuntu.go:193] setting minikube options for container-runtime
	I1101 23:09:09.500489  127145 config.go:180] Loaded profile config "test-preload-230809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I1101 23:09:09.500504  127145 machine.go:91] provisioned docker machine in 1.084227489s
	I1101 23:09:09.500512  127145 start.go:300] post-start starting for "test-preload-230809" (driver="docker")
	I1101 23:09:09.500518  127145 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 23:09:09.500574  127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 23:09:09.500612  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:09.523524  127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
	I1101 23:09:09.606420  127145 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 23:09:09.608955  127145 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 23:09:09.608997  127145 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 23:09:09.609008  127145 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 23:09:09.609014  127145 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 23:09:09.609026  127145 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/addons for local assets ...
	I1101 23:09:09.609074  127145 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/files for local assets ...
	I1101 23:09:09.609141  127145 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem -> 128402.pem in /etc/ssl/certs
	I1101 23:09:09.609211  127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 23:09:09.615422  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /etc/ssl/certs/128402.pem (1708 bytes)
	I1101 23:09:09.632348  127145 start.go:303] post-start completed in 131.826095ms
	I1101 23:09:09.632431  127145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 23:09:09.632484  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:09.655572  127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
	I1101 23:09:09.739833  127145 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 23:09:09.743685  127145 fix.go:57] fixHost completed within 1.353918347s
	I1101 23:09:09.743711  127145 start.go:83] releasing machines lock for "test-preload-230809", held for 1.353965858s
	I1101 23:09:09.743793  127145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-230809
	I1101 23:09:09.766548  127145 ssh_runner.go:195] Run: systemctl --version
	I1101 23:09:09.766597  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:09.766663  127145 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1101 23:09:09.766716  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:09.792264  127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
	I1101 23:09:09.792322  127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
	I1101 23:09:09.888741  127145 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 23:09:09.898412  127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 23:09:09.907129  127145 docker.go:189] disabling docker service ...
	I1101 23:09:09.907178  127145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 23:09:09.916127  127145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 23:09:09.924535  127145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 23:09:10.021637  127145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 23:09:10.121893  127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 23:09:10.130949  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 23:09:10.143348  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I1101 23:09:10.150803  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1101 23:09:10.158084  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1101 23:09:10.165427  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I1101 23:09:10.172620  127145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 23:09:10.178500  127145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 23:09:10.184228  127145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 23:09:10.274591  127145 ssh_runner.go:195] Run: sudo systemctl restart containerd
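	The containerd reconfiguration in the lines above comes down to a handful of in-place `sed` edits on `/etc/containerd/config.toml` (sandbox image, OOM score restriction, cgroup driver, CNI conf dir) followed by a daemon restart. A minimal sketch of those same rewrites, run against a throwaway copy of the file so it is safe to execute anywhere (the starting values in the scratch file are illustrative; the replacement expressions mirror the log):

```shell
# Reproduce the config.toml rewrites from the log against a scratch copy.
tmp=$(mktemp -d)
cat > "$tmp/config.toml" <<'EOF'
sandbox_image = "k8s.gcr.io/pause:3.6"
restrict_oom_score_adj = true
SystemdCgroup = true
conf_dir = "/etc/cni/net.d"
EOF

# Same sed expressions minikube runs (GNU sed; -i edits in place).
sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i "$tmp/config.toml"
sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i "$tmp/config.toml"
sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i "$tmp/config.toml"
sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i "$tmp/config.toml"

grep sandbox_image "$tmp/config.toml"
```

	On the real host these edits are followed by `systemctl daemon-reload` and `systemctl restart containerd`, as the next log lines show.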
	I1101 23:09:10.352393  127145 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1101 23:09:10.352463  127145 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1101 23:09:10.357122  127145 start.go:472] Will wait 60s for crictl version
	I1101 23:09:10.357191  127145 ssh_runner.go:195] Run: sudo crictl version
	I1101 23:09:10.392488  127145 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-11-01T23:09:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
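	The `retry.go:31` entry above shows minikube backing off until the freshly restarted containerd answers `sudo crictl version` (the first attempt fails with "server is not initialized yet"). The same wait-for-ready pattern in plain shell; `flaky_cmd`, the attempt cap, and the sleep interval are illustrative stand-ins, not minikube's actual backoff logic:

```shell
# Generic retry loop: poll a command until it succeeds or attempts run out.
# flaky_cmd stands in for `sudo crictl version`; a counter file makes it
# fail twice before succeeding, mimicking a runtime that is still starting.
state=$(mktemp)
echo 0 > "$state"
flaky_cmd() {
  n=$(cat "$state"); n=$((n + 1)); echo "$n" > "$state"
  [ "$n" -ge 3 ]   # succeeds on the third attempt
}

tries=0
until flaky_cmd; do
  tries=$((tries + 1))
  [ "$tries" -ge 10 ] && { echo "gave up" >&2; exit 1; }
  sleep 0.1   # minikube waits far longer between attempts (~11s in this log)
done
echo "runtime ready after $((tries + 1)) attempts"
```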
	I1101 23:09:21.439528  127145 ssh_runner.go:195] Run: sudo crictl version
	I1101 23:09:21.462449  127145 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1101 23:09:21.462510  127145 ssh_runner.go:195] Run: containerd --version
	I1101 23:09:21.484971  127145 ssh_runner.go:195] Run: containerd --version
	I1101 23:09:21.509013  127145 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
	I1101 23:09:21.510580  127145 cli_runner.go:164] Run: docker network inspect test-preload-230809 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 23:09:21.532621  127145 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1101 23:09:21.536061  127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1101 23:09:21.536135  127145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 23:09:21.558771  127145 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I1101 23:09:21.558833  127145 ssh_runner.go:195] Run: which lz4
	I1101 23:09:21.561739  127145 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 23:09:21.564671  127145 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I1101 23:09:21.564695  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I1101 23:09:22.512481  127145 containerd.go:496] Took 0.950765 seconds to copy over tarball
	I1101 23:09:22.512539  127145 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 23:09:25.309553  127145 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.796992099s)
	I1101 23:09:25.309668  127145 containerd.go:503] Took 2.797150 seconds to extract the tarball
	I1101 23:09:25.309687  127145 ssh_runner.go:146] rm: /preloaded.tar.lz4
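	The preload path above is a compressed image snapshot copied to `/preloaded.tar.lz4`, unpacked into `/var` with `tar -I lz4`, then deleted. A self-contained sketch of that `tar -I <compressor>` round trip; gzip stands in for lz4 here so the example runs without the `lz4` binary installed, and the temp directories are illustrative:

```shell
# Pack and unpack a tarball through an external compressor via tar's -I flag.
# Swap `-I gzip` for `-I lz4` to match what minikube actually runs.
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/payload.txt"

tar -I gzip -C "$src" -cf "$dst/preloaded.tar.gz" payload.txt   # build the preload
tar -I gzip -C "$dst" -xf "$dst/preloaded.tar.gz"               # extract, as done into /var
rm "$dst/preloaded.tar.gz"                                      # log removes the tarball too

cat "$dst/payload.txt"
```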
	I1101 23:09:25.324395  127145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 23:09:25.422371  127145 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 23:09:25.510170  127145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 23:09:25.538232  127145 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 23:09:25.538307  127145 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:09:25.538343  127145 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1101 23:09:25.538380  127145 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
	I1101 23:09:25.538401  127145 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1101 23:09:25.538410  127145 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I1101 23:09:25.538365  127145 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1101 23:09:25.538347  127145 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I1101 23:09:25.538380  127145 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1101 23:09:25.539377  127145 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:09:25.539486  127145 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1101 23:09:25.539520  127145 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1101 23:09:25.539552  127145 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
	I1101 23:09:25.539747  127145 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I1101 23:09:25.540025  127145 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I1101 23:09:25.540223  127145 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1101 23:09:25.540448  127145 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1101 23:09:25.987285  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I1101 23:09:25.999857  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I1101 23:09:26.002925  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
	I1101 23:09:26.009305  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
	I1101 23:09:26.050246  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I1101 23:09:26.065466  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
	I1101 23:09:26.075511  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
	I1101 23:09:26.363138  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1101 23:09:26.825611  127145 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1101 23:09:26.825704  127145 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I1101 23:09:26.825763  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:26.922091  127145 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1101 23:09:26.922201  127145 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1101 23:09:26.922266  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:26.935023  127145 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
	I1101 23:09:26.935049  127145 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
	I1101 23:09:26.935073  127145 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
	I1101 23:09:26.935157  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:26.935073  127145 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1101 23:09:26.935237  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:27.033281  127145 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1101 23:09:27.033386  127145 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I1101 23:09:27.033448  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:27.118607  127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6": (1.053106276s)
	I1101 23:09:27.197931  127145 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
	I1101 23:09:27.118727  127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6": (1.043182812s)
	I1101 23:09:27.145553  127145 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 23:09:27.198012  127145 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1101 23:09:27.198041  127145 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:09:27.198067  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:27.198114  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:27.145664  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I1101 23:09:27.145702  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I1101 23:09:27.145736  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
	I1101 23:09:27.145736  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
	I1101 23:09:27.145776  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I1101 23:09:27.197981  127145 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
	I1101 23:09:27.198282  127145 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1101 23:09:27.198319  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:28.633346  127145 ssh_runner.go:235] Completed: which crictl: (1.435002706s)
	I1101 23:09:28.633407  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
	I1101 23:09:28.633499  127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6: (1.435244347s)
	I1101 23:09:28.633520  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
	I1101 23:09:28.633558  127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.435295917s)
	I1101 23:09:28.633570  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I1101 23:09:28.633630  127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1101 23:09:28.633718  127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.435492576s)
	I1101 23:09:28.633737  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I1101 23:09:28.633801  127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1101 23:09:28.633883  127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.435647522s)
	I1101 23:09:28.633895  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
	I1101 23:09:28.633934  127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.43573031s)
	I1101 23:09:28.633961  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I1101 23:09:28.633997  127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1101 23:09:28.634036  127145 ssh_runner.go:235] Completed: which crictl: (1.435871833s)
	I1101 23:09:28.634053  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:09:28.634098  127145 ssh_runner.go:235] Completed: which crictl: (1.436023391s)
	I1101 23:09:28.634122  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
	I1101 23:09:28.778449  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 23:09:28.778478  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
	I1101 23:09:28.778546  127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1101 23:09:28.778569  127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1101 23:09:28.778584  127145 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1101 23:09:28.778593  127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1101 23:09:28.778618  127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I1101 23:09:28.778652  127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1101 23:09:28.779903  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
	I1101 23:09:28.781996  127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1101 23:09:36.182104  127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (7.403463536s)
	I1101 23:09:36.182144  127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I1101 23:09:36.182176  127145 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1101 23:09:36.182237  127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I1101 23:09:38.315093  127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (2.132819455s)
	I1101 23:09:38.315128  127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I1101 23:09:38.315167  127145 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
	I1101 23:09:38.315245  127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I1101 23:09:38.532314  127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I1101 23:09:38.532357  127145 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 23:09:38.532411  127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1101 23:09:39.739922  127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.207479048s)
	I1101 23:09:39.739955  127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 23:09:39.740004  127145 cache_images.go:92] LoadImages completed in 14.201748543s
	W1101 23:09:39.740191  127145 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6: no such file or directory
	I1101 23:09:39.740259  127145 ssh_runner.go:195] Run: sudo crictl info
	I1101 23:09:39.816714  127145 cni.go:95] Creating CNI manager for ""
	I1101 23:09:39.816751  127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:09:39.816770  127145 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 23:09:39.816787  127145 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-230809 NodeName:test-preload-230809 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 23:09:39.816973  127145 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-230809"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 23:09:39.817109  127145 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-230809 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 23:09:39.817179  127145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I1101 23:09:39.826621  127145 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 23:09:39.826677  127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 23:09:39.835648  127145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
	I1101 23:09:39.916772  127145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 23:09:39.932259  127145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I1101 23:09:39.947304  127145 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1101 23:09:39.950835  127145 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809 for IP: 192.168.67.2
	I1101 23:09:39.950959  127145 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key
	I1101 23:09:39.951010  127145 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key
	I1101 23:09:39.951103  127145 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.key
	I1101 23:09:39.951220  127145 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.key.c7fa3a9e
	I1101 23:09:39.951278  127145 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.key
	I1101 23:09:39.951418  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem (1338 bytes)
	W1101 23:09:39.951461  127145 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840_empty.pem, impossibly tiny 0 bytes
	I1101 23:09:39.951476  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 23:09:39.951510  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem (1078 bytes)
	I1101 23:09:39.951551  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem (1123 bytes)
	I1101 23:09:39.951584  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem (1675 bytes)
	I1101 23:09:39.951640  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem (1708 bytes)
	I1101 23:09:39.952459  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 23:09:40.018330  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 23:09:40.038985  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 23:09:40.059337  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 23:09:40.127519  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 23:09:40.147768  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 23:09:40.216763  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 23:09:40.238171  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 23:09:40.265559  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem --> /usr/share/ca-certificates/12840.pem (1338 bytes)
	I1101 23:09:40.332847  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /usr/share/ca-certificates/128402.pem (1708 bytes)
	I1101 23:09:40.354317  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 23:09:40.414264  127145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 23:09:40.430591  127145 ssh_runner.go:195] Run: openssl version
	I1101 23:09:40.436602  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12840.pem && ln -fs /usr/share/ca-certificates/12840.pem /etc/ssl/certs/12840.pem"
	I1101 23:09:40.445840  127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12840.pem
	I1101 23:09:40.449377  127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:50 /usr/share/ca-certificates/12840.pem
	I1101 23:09:40.449430  127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12840.pem
	I1101 23:09:40.456569  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12840.pem /etc/ssl/certs/51391683.0"
	I1101 23:09:40.464390  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128402.pem && ln -fs /usr/share/ca-certificates/128402.pem /etc/ssl/certs/128402.pem"
	I1101 23:09:40.514612  127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128402.pem
	I1101 23:09:40.518320  127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:50 /usr/share/ca-certificates/128402.pem
	I1101 23:09:40.518385  127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128402.pem
	I1101 23:09:40.524764  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128402.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 23:09:40.533275  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 23:09:40.542165  127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:09:40.545871  127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:09:40.545917  127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:09:40.550867  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
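	The certificate steps above reproduce OpenSSL's hashed-symlink convention: each CA placed under `/etc/ssl/certs` gets a `<subject-hash>.0` symlink (e.g. `b5213941.0` for `minikubeCA.pem`) so TLS verifiers can locate it by hash. A sketch using a throwaway self-signed certificate; the CN and temp directory are illustrative:

```shell
# Recreate the <hash>.0 symlink convention minikube applies in /etc/ssl/certs.
certs=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA-demo" \
  -keyout "$certs/ca.key" -out "$certs/demoCA.pem" 2>/dev/null

hash=$(openssl x509 -hash -noout -in "$certs/demoCA.pem")   # same call as in the log
ln -fs "$certs/demoCA.pem" "$certs/$hash.0"                 # hash-named symlink

ls -l "$certs/$hash.0"
```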
	I1101 23:09:40.558550  127145 kubeadm.go:396] StartCluster: {Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:09:40.558652  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1101 23:09:40.558703  127145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 23:09:40.637065  127145 cri.go:87] found id: "e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c"
	I1101 23:09:40.637096  127145 cri.go:87] found id: "514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720"
	I1101 23:09:40.637108  127145 cri.go:87] found id: "afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a"
	I1101 23:09:40.637121  127145 cri.go:87] found id: "dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8"
	I1101 23:09:40.637131  127145 cri.go:87] found id: ""
	I1101 23:09:40.637166  127145 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1101 23:09:40.735629  127145 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5/rootfs","created":"2022-11-01T23:08:58.356227997Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","pid":2147,"st
atus":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1/rootfs","created":"2022-11-01T23:08:50.712751348Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-55wll_18a63bc3-b29d-45a5-98a8-3f37cfef3c7b","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","pid":1508,"status":
"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424/rootfs","created":"2022-11-01T23:08:30.466593305Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-230809_37b967577315f9064699b525aec41d0d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","pid":2189,"status"
:"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62/rootfs","created":"2022-11-01T23:08:50.775829242Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mprfx_c323cc25-2fa6-4edf-b36c-03da66892a50","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468","pid":1631,"status":"running","b
undle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468/rootfs","created":"2022-11-01T23:08:30.715212813Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994","pid":2246,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994","rootfs":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994/rootfs","created":"2022-11-01T23:08:50.930366595Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","pid":3276,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931/rootfs","created":"2022-11-01T23:09:28.020513803Z","annotations":{"io.kubernetes.cri.container-type":"sandbox",
"io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-230809_bfce36eaaffbf2f7db1c9f4256edcaf8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","pid":2566,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45/rootfs","created":"2022-11-01T23:08:58.223128026Z","annotations":{"io.kubernetes.cri.conta
iner-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-r4qft_93ea1e43-1509-4751-a91c-ee8a9f43f870","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","pid":3285,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1/rootfs","created":"2022-11-01T23:09:28.02269692Z","annotations":{"io.kubernet
es.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-230809_9ccdbc12c48dbd243a9d0335dcf93bfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","pid":3536,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463/rootfs","created":"2022-11-01T23:09:29.
630532491Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-230809_440b295b0419a8945c07a1ed44f1a55e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be","pid":2426,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be/rootfs","created":
"2022-11-01T23:08:54.212636774Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","pid":1503,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8/rootfs","created":"2022-11-01T23:08:30.4665045Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","
io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-230809_440b295b0419a8945c07a1ed44f1a55e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","pid":3584,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05/rootfs","created":"2022-11-01T23:09:29.729675697Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.san
dbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-r4qft_93ea1e43-1509-4751-a91c-ee8a9f43f870","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","pid":1507,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad/rootfs","created":"2022-11-01T23:08:30.46654145Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubern
etes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-230809_bfce36eaaffbf2f7db1c9f4256edcaf8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6","pid":2623,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6/rootfs","created":"2022-11-01T23:08:58.356220401Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"c
ontainer","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a","pid":1630,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a/rootfs","created":"2022-11-01T23:08:30.715566758Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","io.k
ubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16","pid":1633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16/rootfs","created":"2022-11-01T23:08:30.71207489Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersi
on":"1.0.2-dev","id":"dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8","pid":3660,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8/rootfs","created":"2022-11-01T23:09:31.863802538Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","pid":3466,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc88b2919fcdf18
151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7/rootfs","created":"2022-11-01T23:09:29.524514538Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-230809_37b967577315f9064699b525aec41d0d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","pid":1504,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a311b6963f69
909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f/rootfs","created":"2022-11-01T23:08:30.466601473Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-230809_9ccdbc12c48dbd243a9d0335dcf93bfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993","pid":1632,"status":"running","bundle":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993/rootfs","created":"2022-11-01T23:08:30.715174165Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","pid":3538,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a
524265b0003fa3f0aa/rootfs","created":"2022-11-01T23:09:29.63434432Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-55wll_18a63bc3-b29d-45a5-98a8-3f37cfef3c7b","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","pid":3546,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460
b949272bba5/rootfs","created":"2022-11-01T23:09:29.633496847Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_2eb4b78f-b029-431c-a5b6-34253c21c6ae","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","pid":3283,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d
9cce/rootfs","created":"2022-11-01T23:09:28.022341914Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mprfx_c323cc25-2fa6-4edf-b36c-03da66892a50","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","pid":2565,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1/rootfs",
"created":"2022-11-01T23:08:58.221992861Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_2eb4b78f-b029-431c-a5b6-34253c21c6ae","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
	I1101 23:09:40.736083  127145 cri.go:124] list returned 25 containers
	I1101 23:09:40.736101  127145 cri.go:127] container: {ID:12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5 Status:running}
	I1101 23:09:40.736119  127145 cri.go:129] skipping 12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5 - not in ps
	I1101 23:09:40.736130  127145 cri.go:127] container: {ID:25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1 Status:running}
	I1101 23:09:40.736144  127145 cri.go:129] skipping 25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1 - not in ps
	I1101 23:09:40.736156  127145 cri.go:127] container: {ID:4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424 Status:running}
	I1101 23:09:40.736169  127145 cri.go:129] skipping 4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424 - not in ps
	I1101 23:09:40.736180  127145 cri.go:127] container: {ID:57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62 Status:running}
	I1101 23:09:40.736192  127145 cri.go:129] skipping 57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62 - not in ps
	I1101 23:09:40.736204  127145 cri.go:127] container: {ID:6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468 Status:running}
	I1101 23:09:40.736221  127145 cri.go:129] skipping 6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468 - not in ps
	I1101 23:09:40.736232  127145 cri.go:127] container: {ID:7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994 Status:running}
	I1101 23:09:40.736240  127145 cri.go:129] skipping 7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994 - not in ps
	I1101 23:09:40.736246  127145 cri.go:127] container: {ID:84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931 Status:running}
	I1101 23:09:40.736255  127145 cri.go:129] skipping 84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931 - not in ps
	I1101 23:09:40.736266  127145 cri.go:127] container: {ID:8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45 Status:running}
	I1101 23:09:40.736278  127145 cri.go:129] skipping 8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45 - not in ps
	I1101 23:09:40.736289  127145 cri.go:127] container: {ID:969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1 Status:running}
	I1101 23:09:40.736300  127145 cri.go:129] skipping 969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1 - not in ps
	I1101 23:09:40.736305  127145 cri.go:127] container: {ID:9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463 Status:running}
	I1101 23:09:40.736313  127145 cri.go:129] skipping 9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463 - not in ps
	I1101 23:09:40.736320  127145 cri.go:127] container: {ID:9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be Status:running}
	I1101 23:09:40.736333  127145 cri.go:129] skipping 9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be - not in ps
	I1101 23:09:40.736343  127145 cri.go:127] container: {ID:bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8 Status:running}
	I1101 23:09:40.736355  127145 cri.go:129] skipping bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8 - not in ps
	I1101 23:09:40.736366  127145 cri.go:127] container: {ID:c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05 Status:running}
	I1101 23:09:40.736378  127145 cri.go:129] skipping c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05 - not in ps
	I1101 23:09:40.736388  127145 cri.go:127] container: {ID:cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad Status:running}
	I1101 23:09:40.736397  127145 cri.go:129] skipping cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad - not in ps
	I1101 23:09:40.736411  127145 cri.go:127] container: {ID:cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6 Status:running}
	I1101 23:09:40.736429  127145 cri.go:129] skipping cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6 - not in ps
	I1101 23:09:40.736440  127145 cri.go:127] container: {ID:da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a Status:running}
	I1101 23:09:40.736458  127145 cri.go:129] skipping da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a - not in ps
	I1101 23:09:40.736470  127145 cri.go:127] container: {ID:dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16 Status:running}
	I1101 23:09:40.736483  127145 cri.go:129] skipping dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16 - not in ps
	I1101 23:09:40.736493  127145 cri.go:127] container: {ID:dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8 Status:running}
	I1101 23:09:40.736502  127145 cri.go:133] skipping {dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8 running}: state = "running", want "paused"
	I1101 23:09:40.736517  127145 cri.go:127] container: {ID:dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7 Status:running}
	I1101 23:09:40.736530  127145 cri.go:129] skipping dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7 - not in ps
	I1101 23:09:40.736541  127145 cri.go:127] container: {ID:e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f Status:running}
	I1101 23:09:40.736553  127145 cri.go:129] skipping e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f - not in ps
	I1101 23:09:40.736564  127145 cri.go:127] container: {ID:e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993 Status:running}
	I1101 23:09:40.736576  127145 cri.go:129] skipping e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993 - not in ps
	I1101 23:09:40.736586  127145 cri.go:127] container: {ID:ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa Status:running}
	I1101 23:09:40.736594  127145 cri.go:129] skipping ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa - not in ps
	I1101 23:09:40.736603  127145 cri.go:127] container: {ID:f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5 Status:running}
	I1101 23:09:40.736615  127145 cri.go:129] skipping f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5 - not in ps
	I1101 23:09:40.736625  127145 cri.go:127] container: {ID:f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce Status:running}
	I1101 23:09:40.736636  127145 cri.go:129] skipping f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce - not in ps
	I1101 23:09:40.736643  127145 cri.go:127] container: {ID:f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1 Status:running}
	I1101 23:09:40.736658  127145 cri.go:129] skipping f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1 - not in ps
	I1101 23:09:40.736704  127145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 23:09:40.745646  127145 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1101 23:09:40.745673  127145 kubeadm.go:627] restartCluster start
	I1101 23:09:40.745722  127145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 23:09:40.753726  127145 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 23:09:40.754368  127145 kubeconfig.go:92] found "test-preload-230809" server: "https://192.168.67.2:8443"
	I1101 23:09:40.755237  127145 kapi.go:59] client config for test-preload-230809: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.key", CAFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786820), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 23:09:40.755875  127145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 23:09:40.763523  127145 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-11-01 23:08:26.955661256 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-11-01 23:09:39.941360162 +0000
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1101 23:09:40.763543  127145 kubeadm.go:1114] stopping kube-system containers ...
	I1101 23:09:40.763556  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1101 23:09:40.763603  127145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 23:09:40.843646  127145 cri.go:87] found id: "e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c"
	I1101 23:09:40.843681  127145 cri.go:87] found id: "514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720"
	I1101 23:09:40.843693  127145 cri.go:87] found id: "afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a"
	I1101 23:09:40.843703  127145 cri.go:87] found id: "dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8"
	I1101 23:09:40.843711  127145 cri.go:87] found id: ""
	I1101 23:09:40.843719  127145 cri.go:232] Stopping containers: [e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c 514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720 afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8]
	I1101 23:09:40.843770  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:40.847856  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c 514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720 afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8
	I1101 23:09:41.335259  127145 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 23:09:41.402860  127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:09:41.410490  127145 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  1 23:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov  1 23:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Nov  1 23:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  1 23:08 /etc/kubernetes/scheduler.conf
	
	I1101 23:09:41.410554  127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 23:09:41.417229  127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 23:09:41.423830  127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 23:09:41.430364  127145 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 23:09:41.430410  127145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 23:09:41.436788  127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 23:09:41.442864  127145 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 23:09:41.442915  127145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 23:09:41.448988  127145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 23:09:41.455288  127145 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 23:09:41.455307  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:41.753172  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:42.645331  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:43.006957  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:43.058116  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:43.137338  127145 api_server.go:51] waiting for apiserver process to appear ...
	I1101 23:09:43.137438  127145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:09:43.218088  127145 api_server.go:71] duration metric: took 80.740751ms to wait for apiserver process to appear ...
	I1101 23:09:43.218119  127145 api_server.go:87] waiting for apiserver healthz status ...
	I1101 23:09:43.218133  127145 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 23:09:43.223783  127145 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 23:09:43.231489  127145 api_server.go:140] control plane version: v1.24.4
	W1101 23:09:43.231532  127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1101 23:09:43.733092  127145 api_server.go:140] control plane version: v1.24.4
	W1101 23:09:43.733125  127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1101 23:09:44.233705  127145 api_server.go:140] control plane version: v1.24.4
	W1101 23:09:44.233731  127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1101 23:09:44.733150  127145 api_server.go:140] control plane version: v1.24.4
	W1101 23:09:44.733179  127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1101 23:09:45.233717  127145 api_server.go:140] control plane version: v1.24.4
	W1101 23:09:45.233749  127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	W1101 23:09:45.732040  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:46.233010  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:46.732501  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:47.232636  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:47.732455  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:48.232934  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:48.732964  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:49.232994  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	I1101 23:09:52.022667  127145 api_server.go:140] control plane version: v1.24.6
	I1101 23:09:52.022755  127145 api_server.go:130] duration metric: took 8.804626822s to wait for apiserver health ...
	I1101 23:09:52.022776  127145 cni.go:95] Creating CNI manager for ""
	I1101 23:09:52.022793  127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:09:52.025189  127145 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 23:09:52.026860  127145 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 23:09:52.033655  127145 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
	I1101 23:09:52.033680  127145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1101 23:09:52.223817  127145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 23:09:52.990696  127145 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 23:09:52.997505  127145 system_pods.go:59] 8 kube-system pods found
	I1101 23:09:52.997541  127145 system_pods.go:61] "coredns-6d4b75cb6d-r4qft" [93ea1e43-1509-4751-a91c-ee8a9f43f870] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 23:09:52.997551  127145 system_pods.go:61] "etcd-test-preload-230809" [af6823c1-4191-4b7b-b864-c8d4dc5b60b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 23:09:52.997561  127145 system_pods.go:61] "kindnet-55wll" [18a63bc3-b29d-45a5-98a8-3f37cfef3c7b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 23:09:52.997568  127145 system_pods.go:61] "kube-apiserver-test-preload-230809" [7c4baec2-c5b0-4a19-b41f-c54723a6cb9d] Pending
	I1101 23:09:52.997578  127145 system_pods.go:61] "kube-controller-manager-test-preload-230809" [61a6d202-4552-4719-bfd5-7e9295cc25b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 23:09:52.997598  127145 system_pods.go:61] "kube-proxy-mprfx" [c323cc25-2fa6-4edf-b36c-03da66892a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 23:09:52.997611  127145 system_pods.go:61] "kube-scheduler-test-preload-230809" [ae2815cc-6736-4e49-b3c8-8abeaeeea1bd] Pending
	I1101 23:09:52.997623  127145 system_pods.go:61] "storage-provisioner" [2eb4b78f-b029-431c-a5b6-34253c21c6ae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 23:09:52.997635  127145 system_pods.go:74] duration metric: took 6.918381ms to wait for pod list to return data ...
	I1101 23:09:52.997648  127145 node_conditions.go:102] verifying NodePressure condition ...
	I1101 23:09:52.999970  127145 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 23:09:53.000003  127145 node_conditions.go:123] node cpu capacity is 8
	I1101 23:09:53.000015  127145 node_conditions.go:105] duration metric: took 2.358425ms to run NodePressure ...
	I1101 23:09:53.000039  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:53.234562  127145 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1101 23:09:53.237990  127145 kubeadm.go:778] kubelet initialised
	I1101 23:09:53.238014  127145 kubeadm.go:779] duration metric: took 3.422089ms waiting for restarted kubelet to initialise ...
	I1101 23:09:53.238022  127145 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 23:09:53.242529  127145 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace to be "Ready" ...
	I1101 23:09:55.254763  127145 pod_ready.go:102] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"False"
	I1101 23:09:57.753901  127145 pod_ready.go:102] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"False"
	I1101 23:09:59.754592  127145 pod_ready.go:92] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"True"
	I1101 23:09:59.754626  127145 pod_ready.go:81] duration metric: took 6.512068179s waiting for pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace to be "Ready" ...
	I1101 23:09:59.754639  127145 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" ...
	I1101 23:10:01.766834  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:04.264410  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:06.764726  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:09.264989  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:11.265205  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:13.763952  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:15.764164  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:17.764732  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:19.764997  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:22.264415  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:24.764449  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:27.264094  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:29.264748  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:31.764914  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:34.264280  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:36.264981  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:38.765185  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:41.265088  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:43.764636  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:46.265617  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:48.765111  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:51.264670  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:53.264916  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:55.264961  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:57.265052  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:59.764621  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:02.264841  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:04.264932  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:06.764687  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:09.265413  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:11.764819  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:13.765227  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:16.264738  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:18.265154  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:20.764475  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:22.765142  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:25.264490  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:27.265182  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:29.764395  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:31.764559  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:33.765136  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:36.264759  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:38.265094  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:40.764500  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:43.264843  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:45.765686  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:48.264476  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:50.764617  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:52.764701  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:54.765115  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:56.765316  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:59.264346  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:01.264372  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:03.264546  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:05.264956  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:07.764171  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:09.764397  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:11.765095  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:14.264701  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:16.265440  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:18.764276  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:20.764938  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:23.265330  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:25.764449  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:27.764895  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:30.265410  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:32.767373  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:35.265081  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:37.765063  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:40.265350  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:42.765270  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:45.265267  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:47.765107  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:50.265576  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:52.766477  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:55.264930  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:57.765153  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:00.264148  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:02.264609  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:04.265195  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:06.764397  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:08.765157  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:11.264073  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:13.264819  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:15.763483  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:17.763881  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:19.765072  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:21.765183  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:24.265085  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:26.764936  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:29.264520  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:31.265339  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:33.764859  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:36.265232  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:38.764507  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:40.764906  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:42.764962  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:44.765506  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:47.264257  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:49.265001  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:51.765200  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:54.264162  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:56.264864  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:58.764509  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:59.759267  127145 pod_ready.go:81] duration metric: took 4m0.004604004s waiting for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" ...
	E1101 23:13:59.759292  127145 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" (will not retry!)
	I1101 23:13:59.759322  127145 pod_ready.go:38] duration metric: took 4m6.521288423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 23:13:59.759354  127145 kubeadm.go:631] restartCluster took 4m19.013673069s
	W1101 23:13:59.759521  127145 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 23:13:59.759560  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1101 23:14:01.430467  127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.670884606s)
	I1101 23:14:01.430528  127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:14:01.440216  127145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 23:14:01.447136  127145 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 23:14:01.447183  127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:14:01.453660  127145 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 23:14:01.453703  127145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 23:14:01.491674  127145 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1101 23:14:01.491746  127145 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 23:14:01.518815  127145 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1101 23:14:01.518891  127145 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1101 23:14:01.518924  127145 kubeadm.go:317] OS: Linux
	I1101 23:14:01.519001  127145 kubeadm.go:317] CGROUPS_CPU: enabled
	I1101 23:14:01.519091  127145 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1101 23:14:01.519162  127145 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1101 23:14:01.519232  127145 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1101 23:14:01.519307  127145 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1101 23:14:01.519381  127145 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1101 23:14:01.519458  127145 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1101 23:14:01.519533  127145 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1101 23:14:01.519591  127145 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1101 23:14:01.591526  127145 kubeadm.go:317] W1101 23:14:01.486750    6857 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1101 23:14:01.591829  127145 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1101 23:14:01.591936  127145 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 23:14:01.592005  127145 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1101 23:14:01.592050  127145 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1101 23:14:01.592096  127145 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1101 23:14:01.592196  127145 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1101 23:14:01.592269  127145 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1101 23:14:01.592495  127145 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1101 23:14:01.486750    6857 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1101 23:14:01.486750    6857 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I1101 23:14:01.592536  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1101 23:14:01.906961  127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:14:01.916443  127145 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 23:14:01.916504  127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:14:01.923130  127145 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 23:14:01.923166  127145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 23:14:01.960923  127145 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1101 23:14:01.960981  127145 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 23:14:01.987846  127145 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1101 23:14:01.987918  127145 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1101 23:14:01.987961  127145 kubeadm.go:317] OS: Linux
	I1101 23:14:01.988021  127145 kubeadm.go:317] CGROUPS_CPU: enabled
	I1101 23:14:01.988074  127145 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1101 23:14:01.988115  127145 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1101 23:14:01.988186  127145 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1101 23:14:01.988241  127145 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1101 23:14:01.988304  127145 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1101 23:14:01.988371  127145 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1101 23:14:01.988430  127145 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1101 23:14:01.988521  127145 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1101 23:14:02.056387  127145 kubeadm.go:317] W1101 23:14:01.956215    7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1101 23:14:02.056585  127145 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1101 23:14:02.056677  127145 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 23:14:02.056739  127145 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1101 23:14:02.056775  127145 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1101 23:14:02.056811  127145 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1101 23:14:02.056904  127145 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1101 23:14:02.057006  127145 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1101 23:14:02.057085  127145 kubeadm.go:398] StartCluster complete in 4m21.498557806s
	I1101 23:14:02.057126  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:14:02.057181  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:14:02.079779  127145 cri.go:87] found id: ""
	I1101 23:14:02.079803  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.079811  127145 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:14:02.079820  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:14:02.079867  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:14:02.102132  127145 cri.go:87] found id: ""
	I1101 23:14:02.103963  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.103974  127145 logs.go:276] No container was found matching "etcd"
	I1101 23:14:02.103987  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:14:02.104037  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:14:02.127250  127145 cri.go:87] found id: ""
	I1101 23:14:02.127271  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.127278  127145 logs.go:276] No container was found matching "coredns"
	I1101 23:14:02.127282  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:14:02.127329  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:14:02.149764  127145 cri.go:87] found id: ""
	I1101 23:14:02.149785  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.149792  127145 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:14:02.149799  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:14:02.149851  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:14:02.172459  127145 cri.go:87] found id: ""
	I1101 23:14:02.172482  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.172488  127145 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:14:02.172493  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:14:02.172532  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:14:02.194215  127145 cri.go:87] found id: ""
	I1101 23:14:02.194240  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.194246  127145 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:14:02.194252  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:14:02.194295  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:14:02.215924  127145 cri.go:87] found id: ""
	I1101 23:14:02.215945  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.215951  127145 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:14:02.215961  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:14:02.216007  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:14:02.237525  127145 cri.go:87] found id: ""
	I1101 23:14:02.237548  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.237556  127145 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:14:02.237568  127145 logs.go:123] Gathering logs for kubelet ...
	I1101 23:14:02.237581  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:14:02.300252  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.121441    4572 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.300464  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.121486    4572 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.300712  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.134778    4572 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.300934  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.134833    4572 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.301104  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.135478    4572 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.301295  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.135507    4572 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.302724  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.043911    4572 projected.go:192] Error preparing data for projected volume kube-api-access-mxxnh for pod kube-system/kindnet-55wll: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.303262  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044015    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18a63bc3-b29d-45a5-98a8-3f37cfef3c7b-kube-api-access-mxxnh podName:18a63bc3-b29d-45a5-98a8-3f37cfef3c7b nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.043985609 +0000 UTC m=+12.036634856 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mxxnh" (UniqueName: "kubernetes.io/projected/18a63bc3-b29d-45a5-98a8-3f37cfef3c7b-kube-api-access-mxxnh") pod "kindnet-55wll" (UID: "18a63bc3-b29d-45a5-98a8-3f37cfef3c7b") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.303497  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044035    4572 projected.go:192] Error preparing data for projected volume kube-api-access-k9mj5 for pod kube-system/kube-proxy-mprfx: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.303931  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044128    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c323cc25-2fa6-4edf-b36c-03da66892a50-kube-api-access-k9mj5 podName:c323cc25-2fa6-4edf-b36c-03da66892a50 nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.04409823 +0000 UTC m=+12.036747482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-k9mj5" (UniqueName: "kubernetes.io/projected/c323cc25-2fa6-4edf-b36c-03da66892a50-kube-api-access-k9mj5") pod "kube-proxy-mprfx" (UID: "c323cc25-2fa6-4edf-b36c-03da66892a50") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.304244  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.122285    4572 projected.go:192] Error preparing data for projected volume kube-api-access-wfqx2 for pod kube-system/storage-provisioner: [failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.304666  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.122380    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2eb4b78f-b029-431c-a5b6-34253c21c6ae-kube-api-access-wfqx2 podName:2eb4b78f-b029-431c-a5b6-34253c21c6ae nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.122350449 +0000 UTC m=+12.114999680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wfqx2" (UniqueName: "kubernetes.io/projected/2eb4b78f-b029-431c-a5b6-34253c21c6ae-kube-api-access-wfqx2") pod "storage-provisioner" (UID: "2eb4b78f-b029-431c-a5b6-34253c21c6ae") : [failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.305088  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.136572    4572 projected.go:192] Error preparing data for projected volume kube-api-access-2k56t for pod kube-system/coredns-6d4b75cb6d-r4qft: [failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.305507  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.136676    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93ea1e43-1509-4751-a91c-ee8a9f43f870-kube-api-access-2k56t podName:93ea1e43-1509-4751-a91c-ee8a9f43f870 nodeName:}" failed. No retries permitted until 2022-11-01 23:09:54.136638953 +0000 UTC m=+11.129288201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2k56t" (UniqueName: "kubernetes.io/projected/93ea1e43-1509-4751-a91c-ee8a9f43f870-kube-api-access-2k56t") pod "coredns-6d4b75cb6d-r4qft" (UID: "93ea1e43-1509-4751-a91c-ee8a9f43f870") : [failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I1101 23:14:02.328158  127145 logs.go:123] Gathering logs for dmesg ...
	I1101 23:14:02.328187  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:14:02.342140  127145 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:14:02.342171  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:14:02.477646  127145 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:14:02.477672  127145 logs.go:123] Gathering logs for containerd ...
	I1101 23:14:02.477684  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:14:02.532567  127145 logs.go:123] Gathering logs for container status ...
	I1101 23:14:02.532606  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1101 23:14:02.557929  127145 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1101 23:14:01.956215    7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W1101 23:14:02.557965  127145 out.go:239] * 
	* 
	W1101 23:14:02.558080  127145 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1101 23:14:01.956215    7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 23:14:02.558101  127145 out.go:239] * 
	W1101 23:14:02.558873  127145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 23:14:02.561381  127145 out.go:177] X Problems detected in kubelet:
	I1101 23:14:02.562697  127145 out.go:177]   Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.121441    4572 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	I1101 23:14:02.564125  127145 out.go:177]   Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.121486    4572 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	I1101 23:14:02.565464  127145 out.go:177]   Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.134778    4572 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	I1101 23:14:02.568183  127145 out.go:177] 
	W1101 23:14:02.569498  127145 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1101 23:14:01.956215    7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 23:14:02.569611  127145 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1101 23:14:02.569659  127145 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1101 23:14:02.571762  127145 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:69: out/minikube-linux-amd64 start -p test-preload-230809 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.6 failed: exit status 81
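The root cause in the stderr above is kubeadm's preflight check finding etcd's client and peer ports (2379 and 2380) already bound, typically by a leftover etcd from the first `kubeadm init`. As a quick local reproduction of that check, here is a minimal sketch (a hypothetical helper, not part of minikube or kubeadm) that probes whether a TCP port is already in use:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is listening on host:port (connect succeeds)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# etcd's default client (2379) and peer (2380) ports, the ones kubeadm rejects
for p in (2379, 2380):
    print(p, "in use" if port_in_use(p) else "free")
```

To find the process holding a port, `sudo ss -ltnp '( sport = :2379 )'` or `sudo lsof -i :2379` reports the owning PID; note that the `lsof -p<port>` form in minikube's suggestion text filters by PID, not by port.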
panic.go:522: *** TestPreload FAILED at 2022-11-01 23:14:02.608867734 +0000 UTC m=+1751.745324176
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-230809
helpers_test.go:235: (dbg) docker inspect test-preload-230809:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d",
	        "Created": "2022-11-01T23:08:11.051243831Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 123958,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:08:11.72901206Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d/hostname",
	        "HostsPath": "/var/lib/docker/containers/1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d/hosts",
	        "LogPath": "/var/lib/docker/containers/1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d/1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d-json.log",
	        "Name": "/test-preload-230809",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-230809:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-230809",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/709173a0301dc6c7f2d3648daeebfba94871f1297e5d6dc74beb24a9558aace6-init/diff:/var/lib/docker/overlay2/3304d2e292dd827b741fa7e7dfa0dd06c735a2abf2639025717eb96733168a33/diff:/var/lib/docker/overlay2/f66a2ec830111a507a160d2f7f58d1ab0df8159096f23d5da74ca81116f032a4/diff:/var/lib/docker/overlay2/58562370bf5535a09b5f3ac667ae66ace0239a84b1724c693027cd984380e69d/diff:/var/lib/docker/overlay2/ad70e4fabb7d3b3f908814730456a6f69256cb5bf3f6281cf2e1de2d9ad6e620/diff:/var/lib/docker/overlay2/372e614731843da3a6a8586e11682dd7031ded66b212170eab90ed3974b91656/diff:/var/lib/docker/overlay2/0d5e9529a6b310e7de135cb901fad0589f42c74f315a8d227b3f1058a0635d3a/diff:/var/lib/docker/overlay2/68e9f113391c7a1cb7cf63712d04a796653c1b7efd904081fd8696e3142066cb/diff:/var/lib/docker/overlay2/25d5a308de1516fe45d18cc8d3b35ae4e3de5999ad6bffc678475b1fa74ce54c/diff:/var/lib/docker/overlay2/4fbedef0e02e22b00c09b167edef3a01d1baaa6ae2581ce1816acceb7b82904f/diff:/var/lib/docker/overlay2/237634e28f08af84128abf2ca5885d71bf5f916d63c6088eb178b0729931f43f/diff:/var/lib/docker/overlay2/c1e44e9be7cdbbc0eecc5b798955e90ab62ff8e89d859ab692d424b63f8db9a1/diff:/var/lib/docker/overlay2/945c70a7d8c420004bb39705628a454a575ae067a91da51362818da5f64779bc/diff:/var/lib/docker/overlay2/ed05d73c801ea52b22e058a7fa685c4412453d8e5f0af711d6c43dc75ea9f082/diff:/var/lib/docker/overlay2/4f5b59c087860f39c4b24105ac4677a11a5167aec2093628c48e263d18b25f68/diff:/var/lib/docker/overlay2/5535048bf0d8af7ed100e4121cd2d5d8b776a0155a6edccc3bea22e753d8597b/diff:/var/lib/docker/overlay2/51c67944173d540bb52c33e409e2cfb8d381dc5a649d02e5599384faf4caa6ff/diff:/var/lib/docker/overlay2/5a530f1cc647ab6a7e5fbe252ffbfada764bc01fee20f5f70ad2ebe08b60c7c5/diff:/var/lib/docker/overlay2/d4472d58828ae545a5beec970f632730af916c03aea959ec3ec7d64a0579b1ea/diff:/var/lib/docker/overlay2/6b823f45daca0146f21cbfbe06e22b48fd5bf7fcf086765dde5c36cc5ae90aed/diff:/var/lib/docker/overlay2/54b88f4723cfc7221b7f0789d171797ed1328bd24d62508bfa456753f3e5c2bc/diff:/var/lib/docker/overlay2/44599d073f725ff40c4736e9287865ef0372f691d010db33ba7bf69574f74aca/diff:/var/lib/docker/overlay2/68defae06f1c119684bbec2cd0b360da76b8ab455d9a617b0b16ea22bd3617c5/diff:/var/lib/docker/overlay2/2dd86bf6ab6202500623423a11736ce7c2c96ebe5d83bb039f44f0d4981510b4/diff:/var/lib/docker/overlay2/335010880e7bbb7689d4210cb09578047fa8d34b6ded18dcc4d3d5a6cc4287fb/diff:/var/lib/docker/overlay2/d73ca7e5b5a047dfc79343e02709bae69f2414aaed6f2830edbd022af4e1e145/diff:/var/lib/docker/overlay2/dae580a357bf83dff3b3b546fb9cda97e6511f710c236784c68ce84657fb0337/diff:/var/lib/docker/overlay2/1842e3044746991dda288e11a2bee8a8857d749595d769968b661a0994c25215/diff:/var/lib/docker/overlay2/3fba19b5de3fbb9f62126949163b914e6dd8efdb65c12afd6e6d56214581b8a6/diff:/var/lib/docker/overlay2/6ec508232bae92f0262e74463db095e79b446d6658a903f74d6d9275dae17d55/diff:/var/lib/docker/overlay2/653b5d92bafd148a58b3febd568fb54d9ba1f3b109cac8e277d5177a216868c1/diff:/var/lib/docker/overlay2/5fb2dc662190229810bebc6d79e918be90b416edb8ee1e20e951e8031953d813/diff:/var/lib/docker/overlay2/6484c79c5b005c0d8eef871cad9010368b5332e697cb3a01cc7cc94bfed33376/diff:/var/lib/docker/overlay2/81e5b96e2d4c2697e1c6962beb6e71da710754f42e32a941f732c4efab850973/diff:/var/lib/docker/overlay2/85036ccfe63574469e3678df6445e614574f07f77c334997fac7f3ee217f5c54/diff:/var/lib/docker/overlay2/7ff8315528872300329fdbd17f11d0ea04ab7c7778244a12bc621ae84f12cf77/diff:/var/lib/docker/overlay2/c32e188bd4ec64d8f716b7885ce228c89a3c4f2777d3e33ed448911d38ceba55/diff:/var/lib/docker/overlay2/142e8c88931b6205839c329cc5ab1f40b06e30f547860d743f6d571c95a75b91/diff:/var/lib/docker/overlay2/21f148a35621027811131428e59ec3709b661b2a56e8ebfee2a95b3cdfb407e7/diff:/var/lib/docker/overlay2/9111530a9968c33f38dab8aebccd5d93acbd8d331124b7d12a0da63f86ae5768/diff:/var/lib/docker/overlay2/59aee9dd537a039e02b73dce312bf35f6cd3d34146c96208a1461e4c82a284ca/diff:/var/lib/docker/overlay2/3e4cb9f6fecb0597fc001ef0ad000a46fd7410c70475a6e8d6fb98e6d5c4f42a/diff:/var/lib/docker/overlay2/90181e6f161e52f087dda33985e81570a0802727ab8282224c85a24bea25782e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/709173a0301dc6c7f2d3648daeebfba94871f1297e5d6dc74beb24a9558aace6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/709173a0301dc6c7f2d3648daeebfba94871f1297e5d6dc74beb24a9558aace6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/709173a0301dc6c7f2d3648daeebfba94871f1297e5d6dc74beb24a9558aace6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-230809",
	                "Source": "/var/lib/docker/volumes/test-preload-230809/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-230809",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-230809",
	                "name.minikube.sigs.k8s.io": "test-preload-230809",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f41d9d697caf40359c40c070db997896587378c62f9b32141291ebfef9d888f7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49277"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49276"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49273"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49275"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49274"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f41d9d697caf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-230809": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1b57f8fa7ffe",
	                        "test-preload-230809"
	                    ],
	                    "NetworkID": "ef9d1cae9ccd2ec6eccff63562d3f31087cc5f69489a45cf0405ab8b12bd43b5",
	                    "EndpointID": "cfc33f91e4454c4b1ad1c7fe93f0b9346d14231539f2daad0e814e7356820704",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
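The `docker inspect` dump above shows the kic container's guest ports (22, 2376, 5000, 8443, 32443) published on ephemeral localhost ports. If you need those mappings programmatically, one approach is to parse the `NetworkSettings.Ports` map from the inspect JSON; the sketch below runs against a trimmed sample in that shape, with values copied from this report:

```python
import json

# Trimmed sample of what `docker inspect <container>` returns
# (values taken from the inspect output above).
inspect_json = """
[{"NetworkSettings": {"Ports": {
  "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "49277"}],
  "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "49274"}]
}}}]
"""

def host_ports(raw: str) -> dict:
    """Map each container port (e.g. '8443/tcp') to its bound host port."""
    ports = json.loads(raw)[0]["NetworkSettings"]["Ports"]
    return {guest: binds[0]["HostPort"] for guest, binds in ports.items() if binds}

print(host_ports(inspect_json))  # → {'22/tcp': '49277', '8443/tcp': '49274'}
```

On a live container the equivalent one-liner is `docker port test-preload-230809 8443/tcp`, which prints the bound `HostIp:HostPort` directly.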
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-230809 -n test-preload-230809
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-230809 -n test-preload-230809: exit status 2 (335.778728ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-230809 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-225952 ssh -n                                                                 | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
	|         | multinode-225952-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-225952 cp multinode-225952-m03:/home/docker/cp-test.txt                       | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
	|         | multinode-225952:/home/docker/cp-test_multinode-225952-m03_multinode-225952.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-225952 ssh -n                                                                 | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
	|         | multinode-225952-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-225952 ssh -n multinode-225952 sudo cat                                       | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
	|         | /home/docker/cp-test_multinode-225952-m03_multinode-225952.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-225952 cp multinode-225952-m03:/home/docker/cp-test.txt                       | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
	|         | multinode-225952-m02:/home/docker/cp-test_multinode-225952-m03_multinode-225952-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-225952 ssh -n                                                                 | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
	|         | multinode-225952-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-225952 ssh -n multinode-225952-m02 sudo cat                                   | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
	|         | /home/docker/cp-test_multinode-225952-m03_multinode-225952-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-225952 node stop m03                                                          | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
	| node    | multinode-225952 node start                                                             | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-225952                                                                | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC |                     |
	| stop    | -p multinode-225952                                                                     | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:03 UTC |
	| start   | -p multinode-225952                                                                     | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:03 UTC | 01 Nov 22 23:05 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-225952                                                                | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC |                     |
	| node    | multinode-225952 node delete                                                            | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-225952 stop                                                                   | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:06 UTC |
	| start   | -p multinode-225952                                                                     | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:06 UTC | 01 Nov 22 23:07 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-225952                                                                | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:07 UTC |                     |
	| start   | -p multinode-225952-m02                                                                 | multinode-225952-m02 | jenkins | v1.27.1 | 01 Nov 22 23:07 UTC |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-225952-m03                                                                 | multinode-225952-m03 | jenkins | v1.27.1 | 01 Nov 22 23:07 UTC | 01 Nov 22 23:08 UTC |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-225952                                                                 | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC |                     |
	| delete  | -p multinode-225952-m03                                                                 | multinode-225952-m03 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
	| delete  | -p multinode-225952                                                                     | multinode-225952     | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
	| start   | -p test-preload-230809                                                                  | test-preload-230809  | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:09 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --wait=true --preload=false                                                             |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-230809                                                                  | test-preload-230809  | jenkins | v1.27.1 | 01 Nov 22 23:09 UTC | 01 Nov 22 23:09 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| start   | -p test-preload-230809                                                                  | test-preload-230809  | jenkins | v1.27.1 | 01 Nov 22 23:09 UTC |                     |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=docker                                                             |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.6                                                            |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/01 23:09:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 23:09:02.101256  127145 out.go:296] Setting OutFile to fd 1 ...
	I1101 23:09:02.101369  127145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:09:02.101380  127145 out.go:309] Setting ErrFile to fd 2...
	I1101 23:09:02.101385  127145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:09:02.101473  127145 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	I1101 23:09:02.101987  127145 out.go:303] Setting JSON to false
	I1101 23:09:02.102936  127145 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3088,"bootTime":1667341054,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 23:09:02.102996  127145 start.go:126] virtualization: kvm guest
	I1101 23:09:02.105803  127145 out.go:177] * [test-preload-230809] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1101 23:09:02.107347  127145 notify.go:220] Checking for updates...
	I1101 23:09:02.108879  127145 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 23:09:02.110538  127145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 23:09:02.112123  127145 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 23:09:02.113662  127145 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	I1101 23:09:02.115184  127145 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 23:09:02.116881  127145 config.go:180] Loaded profile config "test-preload-230809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I1101 23:09:02.118764  127145 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1101 23:09:02.120144  127145 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 23:09:02.148923  127145 docker.go:137] docker version: linux-20.10.21
	I1101 23:09:02.149004  127145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:09:02.241848  127145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-01 23:09:02.16794253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:09:02.241979  127145 docker.go:254] overlay module found
	I1101 23:09:02.245118  127145 out.go:177] * Using the docker driver based on existing profile
	I1101 23:09:02.246572  127145 start.go:282] selected driver: docker
	I1101 23:09:02.246590  127145 start.go:808] validating driver "docker" against &{Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:09:02.246667  127145 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 23:09:02.247466  127145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:09:02.338554  127145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-01 23:09:02.266470239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:09:02.338791  127145 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 23:09:02.338813  127145 cni.go:95] Creating CNI manager for ""
	I1101 23:09:02.338820  127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:09:02.338831  127145 start_flags.go:317] config:
	{Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:09:02.341335  127145 out.go:177] * Starting control plane node test-preload-230809 in cluster test-preload-230809
	I1101 23:09:02.342819  127145 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1101 23:09:02.344289  127145 out.go:177] * Pulling base image ...
	I1101 23:09:02.345773  127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1101 23:09:02.345854  127145 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 23:09:02.367470  127145 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 23:09:02.367494  127145 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 23:09:02.456956  127145 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1101 23:09:02.456979  127145 cache.go:57] Caching tarball of preloaded images
	I1101 23:09:02.457299  127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1101 23:09:02.459387  127145 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I1101 23:09:02.460985  127145 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1101 23:09:02.574127  127145 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1101 23:09:07.458996  127145 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1101 23:09:07.459100  127145 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1101 23:09:08.389256  127145 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I1101 23:09:08.389384  127145 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/config.json ...
	I1101 23:09:08.389578  127145 cache.go:208] Successfully downloaded all kic artifacts
	I1101 23:09:08.389617  127145 start.go:364] acquiring machines lock for test-preload-230809: {Name:mke051021b2965b04875f4fe9250ee1fc48098e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 23:09:08.389726  127145 start.go:368] acquired machines lock for "test-preload-230809" in 76.094µs
	I1101 23:09:08.389751  127145 start.go:96] Skipping create...Using existing machine configuration
	I1101 23:09:08.389762  127145 fix.go:55] fixHost starting: 
	I1101 23:09:08.390003  127145 cli_runner.go:164] Run: docker container inspect test-preload-230809 --format={{.State.Status}}
	I1101 23:09:08.411982  127145 fix.go:103] recreateIfNeeded on test-preload-230809: state=Running err=<nil>
	W1101 23:09:08.412027  127145 fix.go:129] unexpected machine state, will restart: <nil>
	I1101 23:09:08.414797  127145 out.go:177] * Updating the running docker "test-preload-230809" container ...
	I1101 23:09:08.416264  127145 machine.go:88] provisioning docker machine ...
	I1101 23:09:08.416295  127145 ubuntu.go:169] provisioning hostname "test-preload-230809"
	I1101 23:09:08.416338  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:08.439734  127145 main.go:134] libmachine: Using SSH client type: native
	I1101 23:09:08.440024  127145 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1101 23:09:08.440069  127145 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-230809 && echo "test-preload-230809" | sudo tee /etc/hostname
	I1101 23:09:08.562938  127145 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-230809
	
	I1101 23:09:08.563010  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:08.585385  127145 main.go:134] libmachine: Using SSH client type: native
	I1101 23:09:08.585561  127145 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49277 <nil> <nil>}
	I1101 23:09:08.585590  127145 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-230809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-230809/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-230809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 23:09:08.698901  127145 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 23:09:08.698934  127145 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-6112/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-6112/.minikube}
	I1101 23:09:08.698966  127145 ubuntu.go:177] setting up certificates
	I1101 23:09:08.698978  127145 provision.go:83] configureAuth start
	I1101 23:09:08.699037  127145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-230809
	I1101 23:09:08.721518  127145 provision.go:138] copyHostCerts
	I1101 23:09:08.721585  127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem, removing ...
	I1101 23:09:08.721599  127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem
	I1101 23:09:08.721689  127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem (1078 bytes)
	I1101 23:09:08.721805  127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem, removing ...
	I1101 23:09:08.721820  127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem
	I1101 23:09:08.721860  127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem (1123 bytes)
	I1101 23:09:08.721933  127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem, removing ...
	I1101 23:09:08.721947  127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem
	I1101 23:09:08.721984  127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem (1675 bytes)
	I1101 23:09:08.722065  127145 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem org=jenkins.test-preload-230809 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-230809]
	I1101 23:09:09.342668  127145 provision.go:172] copyRemoteCerts
	I1101 23:09:09.342737  127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 23:09:09.342788  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:09.365869  127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
	I1101 23:09:09.450803  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 23:09:09.467332  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 23:09:09.484069  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 23:09:09.500288  127145 provision.go:86] duration metric: configureAuth took 801.291693ms
	I1101 23:09:09.500314  127145 ubuntu.go:193] setting minikube options for container-runtime
	I1101 23:09:09.500489  127145 config.go:180] Loaded profile config "test-preload-230809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I1101 23:09:09.500504  127145 machine.go:91] provisioned docker machine in 1.084227489s
	I1101 23:09:09.500512  127145 start.go:300] post-start starting for "test-preload-230809" (driver="docker")
	I1101 23:09:09.500518  127145 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 23:09:09.500574  127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 23:09:09.500612  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:09.523524  127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
	I1101 23:09:09.606420  127145 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 23:09:09.608955  127145 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 23:09:09.608997  127145 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 23:09:09.609008  127145 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 23:09:09.609014  127145 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 23:09:09.609026  127145 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/addons for local assets ...
	I1101 23:09:09.609074  127145 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/files for local assets ...
	I1101 23:09:09.609141  127145 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem -> 128402.pem in /etc/ssl/certs
	I1101 23:09:09.609211  127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 23:09:09.615422  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /etc/ssl/certs/128402.pem (1708 bytes)
	I1101 23:09:09.632348  127145 start.go:303] post-start completed in 131.826095ms
	I1101 23:09:09.632431  127145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 23:09:09.632484  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:09.655572  127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
	I1101 23:09:09.739833  127145 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 23:09:09.743685  127145 fix.go:57] fixHost completed within 1.353918347s
	I1101 23:09:09.743711  127145 start.go:83] releasing machines lock for "test-preload-230809", held for 1.353965858s
	I1101 23:09:09.743793  127145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-230809
	I1101 23:09:09.766548  127145 ssh_runner.go:195] Run: systemctl --version
	I1101 23:09:09.766597  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:09.766663  127145 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1101 23:09:09.766716  127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
	I1101 23:09:09.792264  127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
	I1101 23:09:09.792322  127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
	I1101 23:09:09.888741  127145 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 23:09:09.898412  127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 23:09:09.907129  127145 docker.go:189] disabling docker service ...
	I1101 23:09:09.907178  127145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 23:09:09.916127  127145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 23:09:09.924535  127145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 23:09:10.021637  127145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 23:09:10.121893  127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 23:09:10.130949  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 23:09:10.143348  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I1101 23:09:10.150803  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1101 23:09:10.158084  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1101 23:09:10.165427  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I1101 23:09:10.172620  127145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 23:09:10.178500  127145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 23:09:10.184228  127145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 23:09:10.274591  127145 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 23:09:10.352393  127145 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1101 23:09:10.352463  127145 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1101 23:09:10.357122  127145 start.go:472] Will wait 60s for crictl version
	I1101 23:09:10.357191  127145 ssh_runner.go:195] Run: sudo crictl version
	I1101 23:09:10.392488  127145 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-11-01T23:09:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1101 23:09:21.439528  127145 ssh_runner.go:195] Run: sudo crictl version
	I1101 23:09:21.462449  127145 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1101 23:09:21.462510  127145 ssh_runner.go:195] Run: containerd --version
	I1101 23:09:21.484971  127145 ssh_runner.go:195] Run: containerd --version
	I1101 23:09:21.509013  127145 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
	I1101 23:09:21.510580  127145 cli_runner.go:164] Run: docker network inspect test-preload-230809 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 23:09:21.532621  127145 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1101 23:09:21.536061  127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1101 23:09:21.536135  127145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 23:09:21.558771  127145 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I1101 23:09:21.558833  127145 ssh_runner.go:195] Run: which lz4
	I1101 23:09:21.561739  127145 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1101 23:09:21.564671  127145 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I1101 23:09:21.564695  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I1101 23:09:22.512481  127145 containerd.go:496] Took 0.950765 seconds to copy over tarball
	I1101 23:09:22.512539  127145 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 23:09:25.309553  127145 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.796992099s)
	I1101 23:09:25.309668  127145 containerd.go:503] Took 2.797150 seconds to extract the tarball
	I1101 23:09:25.309687  127145 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 23:09:25.324395  127145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 23:09:25.422371  127145 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 23:09:25.510170  127145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 23:09:25.538232  127145 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 23:09:25.538307  127145 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:09:25.538343  127145 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1101 23:09:25.538380  127145 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
	I1101 23:09:25.538401  127145 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1101 23:09:25.538410  127145 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I1101 23:09:25.538365  127145 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1101 23:09:25.538347  127145 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I1101 23:09:25.538380  127145 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1101 23:09:25.539377  127145 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:09:25.539486  127145 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1101 23:09:25.539520  127145 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1101 23:09:25.539552  127145 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
	I1101 23:09:25.539747  127145 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I1101 23:09:25.540025  127145 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I1101 23:09:25.540223  127145 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1101 23:09:25.540448  127145 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1101 23:09:25.987285  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I1101 23:09:25.999857  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I1101 23:09:26.002925  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
	I1101 23:09:26.009305  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
	I1101 23:09:26.050246  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I1101 23:09:26.065466  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
	I1101 23:09:26.075511  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
	I1101 23:09:26.363138  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1101 23:09:26.825611  127145 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1101 23:09:26.825704  127145 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I1101 23:09:26.825763  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:26.922091  127145 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1101 23:09:26.922201  127145 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I1101 23:09:26.922266  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:26.935023  127145 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
	I1101 23:09:26.935049  127145 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
	I1101 23:09:26.935073  127145 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
	I1101 23:09:26.935157  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:26.935073  127145 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
	I1101 23:09:26.935237  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:27.033281  127145 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1101 23:09:27.033386  127145 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I1101 23:09:27.033448  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:27.118607  127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6": (1.053106276s)
	I1101 23:09:27.197931  127145 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
	I1101 23:09:27.118727  127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6": (1.043182812s)
	I1101 23:09:27.145553  127145 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 23:09:27.198012  127145 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
	I1101 23:09:27.198041  127145 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:09:27.198067  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:27.198114  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:27.145664  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I1101 23:09:27.145702  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I1101 23:09:27.145736  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
	I1101 23:09:27.145736  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
	I1101 23:09:27.145776  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I1101 23:09:27.197981  127145 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
	I1101 23:09:27.198282  127145 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
	I1101 23:09:27.198319  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:28.633346  127145 ssh_runner.go:235] Completed: which crictl: (1.435002706s)
	I1101 23:09:28.633407  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
	I1101 23:09:28.633499  127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6: (1.435244347s)
	I1101 23:09:28.633520  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
	I1101 23:09:28.633558  127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.435295917s)
	I1101 23:09:28.633570  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I1101 23:09:28.633630  127145 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I1101 23:09:28.633718  127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.435492576s)
	I1101 23:09:28.633737  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I1101 23:09:28.633801  127145 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I1101 23:09:28.633883  127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.435647522s)
	I1101 23:09:28.633895  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
	I1101 23:09:28.633934  127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.43573031s)
	I1101 23:09:28.633961  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I1101 23:09:28.633997  127145 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I1101 23:09:28.634036  127145 ssh_runner.go:235] Completed: which crictl: (1.435871833s)
	I1101 23:09:28.634053  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:09:28.634098  127145 ssh_runner.go:235] Completed: which crictl: (1.436023391s)
	I1101 23:09:28.634122  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
	I1101 23:09:28.778449  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 23:09:28.778478  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
	I1101 23:09:28.778546  127145 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1101 23:09:28.778569  127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1101 23:09:28.778584  127145 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1101 23:09:28.778593  127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1101 23:09:28.778618  127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I1101 23:09:28.778652  127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1101 23:09:28.779903  127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
	I1101 23:09:28.781996  127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1101 23:09:36.182104  127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (7.403463536s)
	I1101 23:09:36.182144  127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I1101 23:09:36.182176  127145 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1101 23:09:36.182237  127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I1101 23:09:38.315093  127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (2.132819455s)
	I1101 23:09:38.315128  127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I1101 23:09:38.315167  127145 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
	I1101 23:09:38.315245  127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I1101 23:09:38.532314  127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I1101 23:09:38.532357  127145 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 23:09:38.532411  127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1101 23:09:39.739922  127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.207479048s)
	I1101 23:09:39.739955  127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 23:09:39.740004  127145 cache_images.go:92] LoadImages completed in 14.201748543s
	W1101 23:09:39.740191  127145 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6: no such file or directory
	I1101 23:09:39.740259  127145 ssh_runner.go:195] Run: sudo crictl info
	I1101 23:09:39.816714  127145 cni.go:95] Creating CNI manager for ""
	I1101 23:09:39.816751  127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:09:39.816770  127145 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 23:09:39.816787  127145 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-230809 NodeName:test-preload-230809 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 23:09:39.816973  127145 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-230809"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 23:09:39.817109  127145 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-230809 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 23:09:39.817179  127145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I1101 23:09:39.826621  127145 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 23:09:39.826677  127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 23:09:39.835648  127145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
	I1101 23:09:39.916772  127145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 23:09:39.932259  127145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I1101 23:09:39.947304  127145 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1101 23:09:39.950835  127145 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809 for IP: 192.168.67.2
	I1101 23:09:39.950959  127145 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key
	I1101 23:09:39.951010  127145 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key
	I1101 23:09:39.951103  127145 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.key
	I1101 23:09:39.951220  127145 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.key.c7fa3a9e
	I1101 23:09:39.951278  127145 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.key
	I1101 23:09:39.951418  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem (1338 bytes)
	W1101 23:09:39.951461  127145 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840_empty.pem, impossibly tiny 0 bytes
	I1101 23:09:39.951476  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 23:09:39.951510  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem (1078 bytes)
	I1101 23:09:39.951551  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem (1123 bytes)
	I1101 23:09:39.951584  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem (1675 bytes)
	I1101 23:09:39.951640  127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem (1708 bytes)
	I1101 23:09:39.952459  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 23:09:40.018330  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 23:09:40.038985  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 23:09:40.059337  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 23:09:40.127519  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 23:09:40.147768  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 23:09:40.216763  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 23:09:40.238171  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 23:09:40.265559  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem --> /usr/share/ca-certificates/12840.pem (1338 bytes)
	I1101 23:09:40.332847  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /usr/share/ca-certificates/128402.pem (1708 bytes)
	I1101 23:09:40.354317  127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 23:09:40.414264  127145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 23:09:40.430591  127145 ssh_runner.go:195] Run: openssl version
	I1101 23:09:40.436602  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12840.pem && ln -fs /usr/share/ca-certificates/12840.pem /etc/ssl/certs/12840.pem"
	I1101 23:09:40.445840  127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12840.pem
	I1101 23:09:40.449377  127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:50 /usr/share/ca-certificates/12840.pem
	I1101 23:09:40.449430  127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12840.pem
	I1101 23:09:40.456569  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12840.pem /etc/ssl/certs/51391683.0"
	I1101 23:09:40.464390  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128402.pem && ln -fs /usr/share/ca-certificates/128402.pem /etc/ssl/certs/128402.pem"
	I1101 23:09:40.514612  127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128402.pem
	I1101 23:09:40.518320  127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:50 /usr/share/ca-certificates/128402.pem
	I1101 23:09:40.518385  127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128402.pem
	I1101 23:09:40.524764  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128402.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 23:09:40.533275  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 23:09:40.542165  127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:09:40.545871  127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:09:40.545917  127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:09:40.550867  127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
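The certificate setup steps logged above follow OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the subject-name hash that libssl uses to locate a CA in `/etc/ssl/certs`, and the `ln -fs … /etc/ssl/certs/<hash>.0` commands create exactly that lookup symlink. A minimal standalone sketch (not minikube code; it assumes `openssl` is on PATH and uses a throwaway self-signed cert in place of `minikubeCA.pem`):

```shell
set -e
tmp=$(mktemp -d)
# Generate a disposable self-signed cert to stand in for the minikube CA.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -days 1 2>/dev/null
# Subject-name hash, as used by the "openssl x509 -hash -noout -in ..." log lines.
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
# Equivalent of: ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/<hash>.0
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
echo "linked $hash.0"
```

The `test -L … || ln -fs …` guard seen in the logs simply avoids recreating a link that already exists.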
	I1101 23:09:40.558550  127145 kubeadm.go:396] StartCluster: {Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:09:40.558652  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1101 23:09:40.558703  127145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 23:09:40.637065  127145 cri.go:87] found id: "e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c"
	I1101 23:09:40.637096  127145 cri.go:87] found id: "514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720"
	I1101 23:09:40.637108  127145 cri.go:87] found id: "afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a"
	I1101 23:09:40.637121  127145 cri.go:87] found id: "dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8"
	I1101 23:09:40.637131  127145 cri.go:87] found id: ""
	I1101 23:09:40.637166  127145 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1101 23:09:40.735629  127145 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5/rootfs","created":"2022-11-01T23:08:58.356227997Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","pid":2147,"st
atus":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1/rootfs","created":"2022-11-01T23:08:50.712751348Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-55wll_18a63bc3-b29d-45a5-98a8-3f37cfef3c7b","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","pid":1508,"status":
"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424/rootfs","created":"2022-11-01T23:08:30.466593305Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-230809_37b967577315f9064699b525aec41d0d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","pid":2189,"status"
:"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62/rootfs","created":"2022-11-01T23:08:50.775829242Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mprfx_c323cc25-2fa6-4edf-b36c-03da66892a50","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468","pid":1631,"status":"running","b
undle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468/rootfs","created":"2022-11-01T23:08:30.715212813Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994","pid":2246,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994","rootfs":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994/rootfs","created":"2022-11-01T23:08:50.930366595Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","pid":3276,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931/rootfs","created":"2022-11-01T23:09:28.020513803Z","annotations":{"io.kubernetes.cri.container-type":"sandbox",
"io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-230809_bfce36eaaffbf2f7db1c9f4256edcaf8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","pid":2566,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45/rootfs","created":"2022-11-01T23:08:58.223128026Z","annotations":{"io.kubernetes.cri.conta
iner-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-r4qft_93ea1e43-1509-4751-a91c-ee8a9f43f870","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","pid":3285,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1/rootfs","created":"2022-11-01T23:09:28.02269692Z","annotations":{"io.kubernet
es.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-230809_9ccdbc12c48dbd243a9d0335dcf93bfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","pid":3536,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463/rootfs","created":"2022-11-01T23:09:29.
630532491Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-230809_440b295b0419a8945c07a1ed44f1a55e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be","pid":2426,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be/rootfs","created":
"2022-11-01T23:08:54.212636774Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","pid":1503,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8/rootfs","created":"2022-11-01T23:08:30.4665045Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","
io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-230809_440b295b0419a8945c07a1ed44f1a55e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","pid":3584,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05/rootfs","created":"2022-11-01T23:09:29.729675697Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.san
dbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-r4qft_93ea1e43-1509-4751-a91c-ee8a9f43f870","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","pid":1507,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad/rootfs","created":"2022-11-01T23:08:30.46654145Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubern
etes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-230809_bfce36eaaffbf2f7db1c9f4256edcaf8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6","pid":2623,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6/rootfs","created":"2022-11-01T23:08:58.356220401Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"c
ontainer","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a","pid":1630,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a/rootfs","created":"2022-11-01T23:08:30.715566758Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","io.k
ubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16","pid":1633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16/rootfs","created":"2022-11-01T23:08:30.71207489Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersi
on":"1.0.2-dev","id":"dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8","pid":3660,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8/rootfs","created":"2022-11-01T23:09:31.863802538Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","pid":3466,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc88b2919fcdf18
151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7/rootfs","created":"2022-11-01T23:09:29.524514538Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-230809_37b967577315f9064699b525aec41d0d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","pid":1504,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a311b6963f69
909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f/rootfs","created":"2022-11-01T23:08:30.466601473Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-230809_9ccdbc12c48dbd243a9d0335dcf93bfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993","pid":1632,"status":"running","bundle":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993/rootfs","created":"2022-11-01T23:08:30.715174165Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","pid":3538,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a
524265b0003fa3f0aa/rootfs","created":"2022-11-01T23:09:29.63434432Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-55wll_18a63bc3-b29d-45a5-98a8-3f37cfef3c7b","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","pid":3546,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460
b949272bba5/rootfs","created":"2022-11-01T23:09:29.633496847Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_2eb4b78f-b029-431c-a5b6-34253c21c6ae","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","pid":3283,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d
9cce/rootfs","created":"2022-11-01T23:09:28.022341914Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mprfx_c323cc25-2fa6-4edf-b36c-03da66892a50","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","pid":2565,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1/rootfs",
"created":"2022-11-01T23:08:58.221992861Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_2eb4b78f-b029-431c-a5b6-34253c21c6ae","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
	I1101 23:09:40.736083  127145 cri.go:124] list returned 25 containers
	I1101 23:09:40.736101  127145 cri.go:127] container: {ID:12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5 Status:running}
	I1101 23:09:40.736119  127145 cri.go:129] skipping 12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5 - not in ps
	I1101 23:09:40.736130  127145 cri.go:127] container: {ID:25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1 Status:running}
	I1101 23:09:40.736144  127145 cri.go:129] skipping 25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1 - not in ps
	I1101 23:09:40.736156  127145 cri.go:127] container: {ID:4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424 Status:running}
	I1101 23:09:40.736169  127145 cri.go:129] skipping 4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424 - not in ps
	I1101 23:09:40.736180  127145 cri.go:127] container: {ID:57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62 Status:running}
	I1101 23:09:40.736192  127145 cri.go:129] skipping 57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62 - not in ps
	I1101 23:09:40.736204  127145 cri.go:127] container: {ID:6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468 Status:running}
	I1101 23:09:40.736221  127145 cri.go:129] skipping 6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468 - not in ps
	I1101 23:09:40.736232  127145 cri.go:127] container: {ID:7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994 Status:running}
	I1101 23:09:40.736240  127145 cri.go:129] skipping 7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994 - not in ps
	I1101 23:09:40.736246  127145 cri.go:127] container: {ID:84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931 Status:running}
	I1101 23:09:40.736255  127145 cri.go:129] skipping 84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931 - not in ps
	I1101 23:09:40.736266  127145 cri.go:127] container: {ID:8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45 Status:running}
	I1101 23:09:40.736278  127145 cri.go:129] skipping 8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45 - not in ps
	I1101 23:09:40.736289  127145 cri.go:127] container: {ID:969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1 Status:running}
	I1101 23:09:40.736300  127145 cri.go:129] skipping 969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1 - not in ps
	I1101 23:09:40.736305  127145 cri.go:127] container: {ID:9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463 Status:running}
	I1101 23:09:40.736313  127145 cri.go:129] skipping 9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463 - not in ps
	I1101 23:09:40.736320  127145 cri.go:127] container: {ID:9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be Status:running}
	I1101 23:09:40.736333  127145 cri.go:129] skipping 9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be - not in ps
	I1101 23:09:40.736343  127145 cri.go:127] container: {ID:bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8 Status:running}
	I1101 23:09:40.736355  127145 cri.go:129] skipping bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8 - not in ps
	I1101 23:09:40.736366  127145 cri.go:127] container: {ID:c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05 Status:running}
	I1101 23:09:40.736378  127145 cri.go:129] skipping c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05 - not in ps
	I1101 23:09:40.736388  127145 cri.go:127] container: {ID:cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad Status:running}
	I1101 23:09:40.736397  127145 cri.go:129] skipping cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad - not in ps
	I1101 23:09:40.736411  127145 cri.go:127] container: {ID:cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6 Status:running}
	I1101 23:09:40.736429  127145 cri.go:129] skipping cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6 - not in ps
	I1101 23:09:40.736440  127145 cri.go:127] container: {ID:da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a Status:running}
	I1101 23:09:40.736458  127145 cri.go:129] skipping da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a - not in ps
	I1101 23:09:40.736470  127145 cri.go:127] container: {ID:dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16 Status:running}
	I1101 23:09:40.736483  127145 cri.go:129] skipping dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16 - not in ps
	I1101 23:09:40.736493  127145 cri.go:127] container: {ID:dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8 Status:running}
	I1101 23:09:40.736502  127145 cri.go:133] skipping {dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8 running}: state = "running", want "paused"
	I1101 23:09:40.736517  127145 cri.go:127] container: {ID:dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7 Status:running}
	I1101 23:09:40.736530  127145 cri.go:129] skipping dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7 - not in ps
	I1101 23:09:40.736541  127145 cri.go:127] container: {ID:e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f Status:running}
	I1101 23:09:40.736553  127145 cri.go:129] skipping e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f - not in ps
	I1101 23:09:40.736564  127145 cri.go:127] container: {ID:e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993 Status:running}
	I1101 23:09:40.736576  127145 cri.go:129] skipping e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993 - not in ps
	I1101 23:09:40.736586  127145 cri.go:127] container: {ID:ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa Status:running}
	I1101 23:09:40.736594  127145 cri.go:129] skipping ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa - not in ps
	I1101 23:09:40.736603  127145 cri.go:127] container: {ID:f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5 Status:running}
	I1101 23:09:40.736615  127145 cri.go:129] skipping f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5 - not in ps
	I1101 23:09:40.736625  127145 cri.go:127] container: {ID:f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce Status:running}
	I1101 23:09:40.736636  127145 cri.go:129] skipping f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce - not in ps
	I1101 23:09:40.736643  127145 cri.go:127] container: {ID:f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1 Status:running}
	I1101 23:09:40.736658  127145 cri.go:129] skipping f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1 - not in ps
	I1101 23:09:40.736704  127145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 23:09:40.745646  127145 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1101 23:09:40.745673  127145 kubeadm.go:627] restartCluster start
	I1101 23:09:40.745722  127145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 23:09:40.753726  127145 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 23:09:40.754368  127145 kubeconfig.go:92] found "test-preload-230809" server: "https://192.168.67.2:8443"
	I1101 23:09:40.755237  127145 kapi.go:59] client config for test-preload-230809: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.key", CAFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786820), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 23:09:40.755875  127145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 23:09:40.763523  127145 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-11-01 23:08:26.955661256 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-11-01 23:09:39.941360162 +0000
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1101 23:09:40.763543  127145 kubeadm.go:1114] stopping kube-system containers ...
	I1101 23:09:40.763556  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1101 23:09:40.763603  127145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 23:09:40.843646  127145 cri.go:87] found id: "e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c"
	I1101 23:09:40.843681  127145 cri.go:87] found id: "514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720"
	I1101 23:09:40.843693  127145 cri.go:87] found id: "afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a"
	I1101 23:09:40.843703  127145 cri.go:87] found id: "dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8"
	I1101 23:09:40.843711  127145 cri.go:87] found id: ""
	I1101 23:09:40.843719  127145 cri.go:232] Stopping containers: [e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c 514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720 afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8]
	I1101 23:09:40.843770  127145 ssh_runner.go:195] Run: which crictl
	I1101 23:09:40.847856  127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c 514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720 afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8
	I1101 23:09:41.335259  127145 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 23:09:41.402860  127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:09:41.410490  127145 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  1 23:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov  1 23:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Nov  1 23:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  1 23:08 /etc/kubernetes/scheduler.conf
	
	I1101 23:09:41.410554  127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 23:09:41.417229  127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 23:09:41.423830  127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 23:09:41.430364  127145 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 23:09:41.430410  127145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 23:09:41.436788  127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 23:09:41.442864  127145 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 23:09:41.442915  127145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 23:09:41.448988  127145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 23:09:41.455288  127145 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 23:09:41.455307  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:41.753172  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:42.645331  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:43.006957  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:43.058116  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:43.137338  127145 api_server.go:51] waiting for apiserver process to appear ...
	I1101 23:09:43.137438  127145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:09:43.218088  127145 api_server.go:71] duration metric: took 80.740751ms to wait for apiserver process to appear ...
	I1101 23:09:43.218119  127145 api_server.go:87] waiting for apiserver healthz status ...
	I1101 23:09:43.218133  127145 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 23:09:43.223783  127145 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 23:09:43.231489  127145 api_server.go:140] control plane version: v1.24.4
	W1101 23:09:43.231532  127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1101 23:09:43.733092  127145 api_server.go:140] control plane version: v1.24.4
	W1101 23:09:43.733125  127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1101 23:09:44.233705  127145 api_server.go:140] control plane version: v1.24.4
	W1101 23:09:44.233731  127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1101 23:09:44.733150  127145 api_server.go:140] control plane version: v1.24.4
	W1101 23:09:44.733179  127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	I1101 23:09:45.233717  127145 api_server.go:140] control plane version: v1.24.4
	W1101 23:09:45.233749  127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
	W1101 23:09:45.732040  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:46.233010  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:46.732501  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:47.232636  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:47.732455  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:48.232934  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:48.732964  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	W1101 23:09:49.232994  127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
	I1101 23:09:52.022667  127145 api_server.go:140] control plane version: v1.24.6
	I1101 23:09:52.022755  127145 api_server.go:130] duration metric: took 8.804626822s to wait for apiserver health ...
	I1101 23:09:52.022776  127145 cni.go:95] Creating CNI manager for ""
	I1101 23:09:52.022793  127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:09:52.025189  127145 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 23:09:52.026860  127145 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 23:09:52.033655  127145 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
	I1101 23:09:52.033680  127145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1101 23:09:52.223817  127145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 23:09:52.990696  127145 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 23:09:52.997505  127145 system_pods.go:59] 8 kube-system pods found
	I1101 23:09:52.997541  127145 system_pods.go:61] "coredns-6d4b75cb6d-r4qft" [93ea1e43-1509-4751-a91c-ee8a9f43f870] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 23:09:52.997551  127145 system_pods.go:61] "etcd-test-preload-230809" [af6823c1-4191-4b7b-b864-c8d4dc5b60b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 23:09:52.997561  127145 system_pods.go:61] "kindnet-55wll" [18a63bc3-b29d-45a5-98a8-3f37cfef3c7b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 23:09:52.997568  127145 system_pods.go:61] "kube-apiserver-test-preload-230809" [7c4baec2-c5b0-4a19-b41f-c54723a6cb9d] Pending
	I1101 23:09:52.997578  127145 system_pods.go:61] "kube-controller-manager-test-preload-230809" [61a6d202-4552-4719-bfd5-7e9295cc25b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 23:09:52.997598  127145 system_pods.go:61] "kube-proxy-mprfx" [c323cc25-2fa6-4edf-b36c-03da66892a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 23:09:52.997611  127145 system_pods.go:61] "kube-scheduler-test-preload-230809" [ae2815cc-6736-4e49-b3c8-8abeaeeea1bd] Pending
	I1101 23:09:52.997623  127145 system_pods.go:61] "storage-provisioner" [2eb4b78f-b029-431c-a5b6-34253c21c6ae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 23:09:52.997635  127145 system_pods.go:74] duration metric: took 6.918381ms to wait for pod list to return data ...
	I1101 23:09:52.997648  127145 node_conditions.go:102] verifying NodePressure condition ...
	I1101 23:09:52.999970  127145 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 23:09:53.000003  127145 node_conditions.go:123] node cpu capacity is 8
	I1101 23:09:53.000015  127145 node_conditions.go:105] duration metric: took 2.358425ms to run NodePressure ...
	I1101 23:09:53.000039  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:09:53.234562  127145 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1101 23:09:53.237990  127145 kubeadm.go:778] kubelet initialised
	I1101 23:09:53.238014  127145 kubeadm.go:779] duration metric: took 3.422089ms waiting for restarted kubelet to initialise ...
	I1101 23:09:53.238022  127145 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 23:09:53.242529  127145 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace to be "Ready" ...
	I1101 23:09:55.254763  127145 pod_ready.go:102] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"False"
	I1101 23:09:57.753901  127145 pod_ready.go:102] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"False"
	I1101 23:09:59.754592  127145 pod_ready.go:92] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"True"
	I1101 23:09:59.754626  127145 pod_ready.go:81] duration metric: took 6.512068179s waiting for pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace to be "Ready" ...
	I1101 23:09:59.754639  127145 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" ...
	I1101 23:10:01.766834  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:04.264410  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:06.764726  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:09.264989  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:11.265205  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:13.763952  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:15.764164  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:17.764732  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:19.764997  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:22.264415  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:24.764449  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:27.264094  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:29.264748  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:31.764914  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:34.264280  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:36.264981  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:38.765185  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:41.265088  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:43.764636  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:46.265617  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:48.765111  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:51.264670  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:53.264916  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:55.264961  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:57.265052  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:10:59.764621  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:02.264841  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:04.264932  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:06.764687  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:09.265413  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:11.764819  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:13.765227  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:16.264738  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:18.265154  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:20.764475  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:22.765142  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:25.264490  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:27.265182  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:29.764395  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:31.764559  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:33.765136  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:36.264759  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:38.265094  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:40.764500  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:43.264843  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:45.765686  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:48.264476  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:50.764617  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:52.764701  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:54.765115  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:56.765316  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:11:59.264346  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:01.264372  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:03.264546  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:05.264956  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:07.764171  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:09.764397  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:11.765095  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:14.264701  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:16.265440  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:18.764276  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:20.764938  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:23.265330  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:25.764449  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:27.764895  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:30.265410  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:32.767373  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:35.265081  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:37.765063  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:40.265350  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:42.765270  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:45.265267  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:47.765107  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:50.265576  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:52.766477  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:55.264930  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:12:57.765153  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:00.264148  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:02.264609  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:04.265195  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:06.764397  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:08.765157  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:11.264073  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:13.264819  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:15.763483  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:17.763881  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:19.765072  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:21.765183  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:24.265085  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:26.764936  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:29.264520  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:31.265339  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:33.764859  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:36.265232  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:38.764507  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:40.764906  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:42.764962  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:44.765506  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:47.264257  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:49.265001  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:51.765200  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:54.264162  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:56.264864  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:58.764509  127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
	I1101 23:13:59.759267  127145 pod_ready.go:81] duration metric: took 4m0.004604004s waiting for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" ...
	E1101 23:13:59.759292  127145 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" (will not retry!)
	I1101 23:13:59.759322  127145 pod_ready.go:38] duration metric: took 4m6.521288423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 23:13:59.759354  127145 kubeadm.go:631] restartCluster took 4m19.013673069s
	W1101 23:13:59.759521  127145 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 23:13:59.759560  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1101 23:14:01.430467  127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.670884606s)
	I1101 23:14:01.430528  127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:14:01.440216  127145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 23:14:01.447136  127145 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 23:14:01.447183  127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:14:01.453660  127145 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 23:14:01.453703  127145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 23:14:01.491674  127145 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1101 23:14:01.491746  127145 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 23:14:01.518815  127145 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1101 23:14:01.518891  127145 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1101 23:14:01.518924  127145 kubeadm.go:317] OS: Linux
	I1101 23:14:01.519001  127145 kubeadm.go:317] CGROUPS_CPU: enabled
	I1101 23:14:01.519091  127145 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1101 23:14:01.519162  127145 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1101 23:14:01.519232  127145 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1101 23:14:01.519307  127145 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1101 23:14:01.519381  127145 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1101 23:14:01.519458  127145 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1101 23:14:01.519533  127145 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1101 23:14:01.519591  127145 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1101 23:14:01.591526  127145 kubeadm.go:317] W1101 23:14:01.486750    6857 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1101 23:14:01.591829  127145 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1101 23:14:01.591936  127145 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 23:14:01.592005  127145 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1101 23:14:01.592050  127145 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1101 23:14:01.592096  127145 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1101 23:14:01.592196  127145 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1101 23:14:01.592269  127145 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1101 23:14:01.592495  127145 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1101 23:14:01.486750    6857 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I1101 23:14:01.592536  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1101 23:14:01.906961  127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:14:01.916443  127145 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 23:14:01.916504  127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:14:01.923130  127145 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 23:14:01.923166  127145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 23:14:01.960923  127145 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
	I1101 23:14:01.960981  127145 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 23:14:01.987846  127145 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1101 23:14:01.987918  127145 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1101 23:14:01.987961  127145 kubeadm.go:317] OS: Linux
	I1101 23:14:01.988021  127145 kubeadm.go:317] CGROUPS_CPU: enabled
	I1101 23:14:01.988074  127145 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1101 23:14:01.988115  127145 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1101 23:14:01.988186  127145 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1101 23:14:01.988241  127145 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1101 23:14:01.988304  127145 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1101 23:14:01.988371  127145 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1101 23:14:01.988430  127145 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1101 23:14:01.988521  127145 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1101 23:14:02.056387  127145 kubeadm.go:317] W1101 23:14:01.956215    7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1101 23:14:02.056585  127145 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1101 23:14:02.056677  127145 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 23:14:02.056739  127145 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
	I1101 23:14:02.056775  127145 kubeadm.go:317] 	[ERROR Port-2379]: Port 2379 is in use
	I1101 23:14:02.056811  127145 kubeadm.go:317] 	[ERROR Port-2380]: Port 2380 is in use
	I1101 23:14:02.056904  127145 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	I1101 23:14:02.057006  127145 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1101 23:14:02.057085  127145 kubeadm.go:398] StartCluster complete in 4m21.498557806s
	I1101 23:14:02.057126  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:14:02.057181  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:14:02.079779  127145 cri.go:87] found id: ""
	I1101 23:14:02.079803  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.079811  127145 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:14:02.079820  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:14:02.079867  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:14:02.102132  127145 cri.go:87] found id: ""
	I1101 23:14:02.103963  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.103974  127145 logs.go:276] No container was found matching "etcd"
	I1101 23:14:02.103987  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:14:02.104037  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:14:02.127250  127145 cri.go:87] found id: ""
	I1101 23:14:02.127271  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.127278  127145 logs.go:276] No container was found matching "coredns"
	I1101 23:14:02.127282  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:14:02.127329  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:14:02.149764  127145 cri.go:87] found id: ""
	I1101 23:14:02.149785  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.149792  127145 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:14:02.149799  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:14:02.149851  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:14:02.172459  127145 cri.go:87] found id: ""
	I1101 23:14:02.172482  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.172488  127145 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:14:02.172493  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:14:02.172532  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:14:02.194215  127145 cri.go:87] found id: ""
	I1101 23:14:02.194240  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.194246  127145 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:14:02.194252  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:14:02.194295  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:14:02.215924  127145 cri.go:87] found id: ""
	I1101 23:14:02.215945  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.215951  127145 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:14:02.215961  127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:14:02.216007  127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:14:02.237525  127145 cri.go:87] found id: ""
	I1101 23:14:02.237548  127145 logs.go:274] 0 containers: []
	W1101 23:14:02.237556  127145 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:14:02.237568  127145 logs.go:123] Gathering logs for kubelet ...
	I1101 23:14:02.237581  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:14:02.300252  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.121441    4572 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.300464  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.121486    4572 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.300712  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.134778    4572 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.300934  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.134833    4572 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.301104  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.135478    4572 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.301295  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.135507    4572 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	W1101 23:14:02.302724  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.043911    4572 projected.go:192] Error preparing data for projected volume kube-api-access-mxxnh for pod kube-system/kindnet-55wll: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.303262  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044015    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18a63bc3-b29d-45a5-98a8-3f37cfef3c7b-kube-api-access-mxxnh podName:18a63bc3-b29d-45a5-98a8-3f37cfef3c7b nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.043985609 +0000 UTC m=+12.036634856 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mxxnh" (UniqueName: "kubernetes.io/projected/18a63bc3-b29d-45a5-98a8-3f37cfef3c7b-kube-api-access-mxxnh") pod "kindnet-55wll" (UID: "18a63bc3-b29d-45a5-98a8-3f37cfef3c7b") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.303497  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044035    4572 projected.go:192] Error preparing data for projected volume kube-api-access-k9mj5 for pod kube-system/kube-proxy-mprfx: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.303931  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044128    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c323cc25-2fa6-4edf-b36c-03da66892a50-kube-api-access-k9mj5 podName:c323cc25-2fa6-4edf-b36c-03da66892a50 nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.04409823 +0000 UTC m=+12.036747482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-k9mj5" (UniqueName: "kubernetes.io/projected/c323cc25-2fa6-4edf-b36c-03da66892a50-kube-api-access-k9mj5") pod "kube-proxy-mprfx" (UID: "c323cc25-2fa6-4edf-b36c-03da66892a50") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.304244  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.122285    4572 projected.go:192] Error preparing data for projected volume kube-api-access-wfqx2 for pod kube-system/storage-provisioner: [failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.304666  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.122380    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2eb4b78f-b029-431c-a5b6-34253c21c6ae-kube-api-access-wfqx2 podName:2eb4b78f-b029-431c-a5b6-34253c21c6ae nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.122350449 +0000 UTC m=+12.114999680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wfqx2" (UniqueName: "kubernetes.io/projected/2eb4b78f-b029-431c-a5b6-34253c21c6ae-kube-api-access-wfqx2") pod "storage-provisioner" (UID: "2eb4b78f-b029-431c-a5b6-34253c21c6ae") : [failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.305088  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.136572    4572 projected.go:192] Error preparing data for projected volume kube-api-access-2k56t for pod kube-system/coredns-6d4b75cb6d-r4qft: [failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W1101 23:14:02.305507  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.136676    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93ea1e43-1509-4751-a91c-ee8a9f43f870-kube-api-access-2k56t podName:93ea1e43-1509-4751-a91c-ee8a9f43f870 nodeName:}" failed. No retries permitted until 2022-11-01 23:09:54.136638953 +0000 UTC m=+11.129288201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2k56t" (UniqueName: "kubernetes.io/projected/93ea1e43-1509-4751-a91c-ee8a9f43f870-kube-api-access-2k56t") pod "coredns-6d4b75cb6d-r4qft" (UID: "93ea1e43-1509-4751-a91c-ee8a9f43f870") : [failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I1101 23:14:02.328158  127145 logs.go:123] Gathering logs for dmesg ...
	I1101 23:14:02.328187  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:14:02.342140  127145 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:14:02.342171  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:14:02.477646  127145 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:14:02.477672  127145 logs.go:123] Gathering logs for containerd ...
	I1101 23:14:02.477684  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:14:02.532567  127145 logs.go:123] Gathering logs for container status ...
	I1101 23:14:02.532606  127145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1101 23:14:02.557929  127145 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1101 23:14:01.956215    7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W1101 23:14:02.557965  127145 out.go:239] * 
	W1101 23:14:02.558080  127145 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1101 23:14:01.956215    7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 23:14:02.558101  127145 out.go:239] * 
	W1101 23:14:02.558873  127145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 23:14:02.561381  127145 out.go:177] X Problems detected in kubelet:
	I1101 23:14:02.562697  127145 out.go:177]   Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.121441    4572 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	I1101 23:14:02.564125  127145 out.go:177]   Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.121486    4572 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	I1101 23:14:02.565464  127145 out.go:177]   Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.134778    4572 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
	I1101 23:14:02.568183  127145 out.go:177] 
	W1101 23:14:02.569498  127145 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.6
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	
	stderr:
	W1101 23:14:01.956215    7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 23:14:02.569611  127145 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W1101 23:14:02.569659  127145 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I1101 23:14:02.571762  127145 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-11-01 23:08:12 UTC, end at Tue 2022-11-01 23:14:03 UTC. --
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.719525032Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.734560810Z" level=info msg="StopPodSandbox for \"this\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.734618346Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.749253336Z" level=info msg="StopPodSandbox for \"endpoint\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.749297038Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.763665423Z" level=info msg="StopPodSandbox for \"is\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.763703602Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.778694852Z" level=info msg="StopPodSandbox for \"deprecated,\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.778747881Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.794360465Z" level=info msg="StopPodSandbox for \"please\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.794405615Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.810007645Z" level=info msg="StopPodSandbox for \"consider\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.810070144Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.825361791Z" level=info msg="StopPodSandbox for \"using\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.825415140Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.840372611Z" level=info msg="StopPodSandbox for \"full\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.840414789Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.856508587Z" level=info msg="StopPodSandbox for \"URL\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.856554561Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.870954124Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.871012126Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.886230057Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.886270252Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.902044244Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.902102673Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.007357] FS-Cache: O-key=[8] '8aa00f0200000000'
	[  +0.004958] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006585] FS-Cache: N-cookie d=00000000f5a48031{9p.inode} n=00000000f831f3cd
	[  +0.008739] FS-Cache: N-key=[8] '8aa00f0200000000'
	[  +0.461145] FS-Cache: Duplicate cookie detected
	[  +0.004704] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=00000000f5a48031{9p.inode} n=00000000be3fb01f
	[  +0.007375] FS-Cache: O-key=[8] '9ba00f0200000000'
	[  +0.004971] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007981] FS-Cache: N-cookie d=00000000f5a48031{9p.inode} n=0000000004318e07
	[  +0.008713] FS-Cache: N-key=[8] '9ba00f0200000000'
	[ +34.615849] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 1 23:06] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fc1228290d01
	[  +0.000006] ll header: 00000000: 02 42 e4 a2 b0 46 02 42 c0 a8 3a 02 08 00
	[  +1.007001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fc1228290d01
	[  +0.000006] ll header: 00000000: 02 42 e4 a2 b0 46 02 42 c0 a8 3a 02 08 00
	[  +2.015846] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fc1228290d01
	[  +0.000006] ll header: 00000000: 02 42 e4 a2 b0 46 02 42 c0 a8 3a 02 08 00
	[  +4.063673] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fc1228290d01
	[  +0.000008] ll header: 00000000: 02 42 e4 a2 b0 46 02 42 c0 a8 3a 02 08 00
	[  +8.191354] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fc1228290d01
	[  +0.000006] ll header: 00000000: 02 42 e4 a2 b0 46 02 42 c0 a8 3a 02 08 00
	[Nov 1 23:10] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000405] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.011308] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> kernel <==
	*  23:14:03 up 56 min,  0 users,  load average: 0.14, 0.48, 0.59
	Linux test-preload-230809 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-11-01 23:08:12 UTC, end at Tue 2022-11-01 23:14:03 UTC. --
	Nov 01 23:12:27 test-preload-230809 kubelet[4572]: I1101 23:12:27.245759    4572 scope.go:110] "RemoveContainer" containerID="2e39d69d84d2797ec76a606fe198ee1e0feaff253ef2151bf0438a17805b7955"
	Nov 01 23:12:27 test-preload-230809 kubelet[4572]: E1101 23:12:27.246301    4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
	Nov 01 23:12:42 test-preload-230809 kubelet[4572]: I1101 23:12:42.245149    4572 scope.go:110] "RemoveContainer" containerID="2e39d69d84d2797ec76a606fe198ee1e0feaff253ef2151bf0438a17805b7955"
	Nov 01 23:12:42 test-preload-230809 kubelet[4572]: E1101 23:12:42.245496    4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
	Nov 01 23:12:54 test-preload-230809 kubelet[4572]: I1101 23:12:54.245511    4572 scope.go:110] "RemoveContainer" containerID="2e39d69d84d2797ec76a606fe198ee1e0feaff253ef2151bf0438a17805b7955"
	Nov 01 23:12:54 test-preload-230809 kubelet[4572]: I1101 23:12:54.741370    4572 scope.go:110] "RemoveContainer" containerID="2e39d69d84d2797ec76a606fe198ee1e0feaff253ef2151bf0438a17805b7955"
	Nov 01 23:12:54 test-preload-230809 kubelet[4572]: I1101 23:12:54.741691    4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
	Nov 01 23:12:54 test-preload-230809 kubelet[4572]: E1101 23:12:54.742118    4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
	Nov 01 23:12:59 test-preload-230809 kubelet[4572]: I1101 23:12:59.347713    4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
	Nov 01 23:12:59 test-preload-230809 kubelet[4572]: E1101 23:12:59.348045    4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
	Nov 01 23:13:01 test-preload-230809 kubelet[4572]: I1101 23:13:01.926828    4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
	Nov 01 23:13:01 test-preload-230809 kubelet[4572]: E1101 23:13:01.927359    4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
	Nov 01 23:13:02 test-preload-230809 kubelet[4572]: I1101 23:13:02.759079    4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
	Nov 01 23:13:02 test-preload-230809 kubelet[4572]: E1101 23:13:02.759443    4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
	Nov 01 23:13:15 test-preload-230809 kubelet[4572]: I1101 23:13:15.245201    4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
	Nov 01 23:13:15 test-preload-230809 kubelet[4572]: E1101 23:13:15.245539    4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
	Nov 01 23:13:28 test-preload-230809 kubelet[4572]: I1101 23:13:28.244979    4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
	Nov 01 23:13:28 test-preload-230809 kubelet[4572]: E1101 23:13:28.245543    4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
	Nov 01 23:13:41 test-preload-230809 kubelet[4572]: I1101 23:13:41.245914    4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
	Nov 01 23:13:41 test-preload-230809 kubelet[4572]: E1101 23:13:41.246296    4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
	Nov 01 23:13:54 test-preload-230809 kubelet[4572]: I1101 23:13:54.245553    4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
	Nov 01 23:13:54 test-preload-230809 kubelet[4572]: E1101 23:13:54.245880    4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
	Nov 01 23:13:59 test-preload-230809 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Nov 01 23:13:59 test-preload-230809 systemd[1]: kubelet.service: Succeeded.
	Nov 01 23:13:59 test-preload-230809 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

-- /stdout --
** stderr ** 
	E1101 23:14:03.605087  132226 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-230809 -n test-preload-230809
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-230809 -n test-preload-230809: exit status 2 (344.059582ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-230809" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-230809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-230809
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-230809: (2.342431706s)
--- FAIL: TestPreload (356.66s)
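The GUEST_PORT_IN_USE exit above comes from kubeadm's preflight finding etcd's client/peer ports (2379/2380) already bound, which is consistent with the crash-looping etcd container in the kubelet log. As a local triage sketch (not part of the test run), the two ports can be probed with bash's built-in /dev/tcp, so nothing like lsof or ss needs to be installed:

```shell
#!/usr/bin/env bash
# Probe the etcd client (2379) and peer (2380) ports that kubeadm's
# preflight reported as in use. Opening /dev/tcp succeeds only if
# something is listening; a refused connection takes the else branch.
for port in 2379 2380; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is in use"
  else
    echo "port $port is free"
  fi
done
```

On the failing node this would report both ports in use; the stale process holding them (here, the leftover etcd from the first start) is what kubeadm's suggestion about killing the conflicting process refers to.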

TestKubernetesUpgrade (583.5s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-231829 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-231829 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.618689548s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-231829

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-231829: (15.258538782s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-231829 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-231829 status --format={{.Host}}: exit status 7 (127.411934ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-231829 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-231829 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (8m44.357942762s)

-- stdout --
	* [kubernetes-upgrade-231829] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-231829 in cluster kubernetes-upgrade-231829
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-231829" ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Nov 01 23:27:18 kubernetes-upgrade-231829 kubelet[12586]: E1101 23:27:18.390338   12586 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:27:19 kubernetes-upgrade-231829 kubelet[12597]: E1101 23:27:19.139874   12597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:27:19 kubernetes-upgrade-231829 kubelet[12609]: E1101 23:27:19.896066   12609 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	
	

-- /stdout --
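The repeated `unknown flag: --cni-conf-dir` kubelet failures in the stdout above line up with the profile config shown further down, which still carries `ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}]` from the v1.16.0 start; the kubelet shipped with v1.25.3 rejects that flag. A sketch against a throwaway fixture (not the real profile file, whose on-disk path is not shown in this report) of how one might grep for the stale entry:

```shell
#!/usr/bin/env bash
# Fixture only: mimic the persisted profile config with the stale
# kubelet extra-config entry, then detect it the way one would on a
# real profile before retrying the upgrade.
cfg=$(mktemp)
cat >"$cfg" <<'EOF'
{"ExtraOptions":[{"Component":"kubelet","Key":"cni-conf-dir","Value":"/etc/cni/net.mk"}]}
EOF
if grep -q '"Key":"cni-conf-dir"' "$cfg"; then
  echo "stale kubelet flag present"   # prints for this fixture
fi
rm -f "$cfg"
```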
** stderr ** 
	I1101 23:19:24.348610  185407 out.go:296] Setting OutFile to fd 1 ...
	I1101 23:19:24.348825  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:19:24.348837  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:19:24.348844  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:19:24.348987  185407 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	I1101 23:19:24.349545  185407 out.go:303] Setting JSON to false
	I1101 23:19:24.351503  185407 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3710,"bootTime":1667341054,"procs":1126,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 23:19:24.351608  185407 start.go:126] virtualization: kvm guest
	I1101 23:19:24.354917  185407 out.go:177] * [kubernetes-upgrade-231829] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1101 23:19:24.357151  185407 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 23:19:24.357112  185407 notify.go:220] Checking for updates...
	I1101 23:19:24.360506  185407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 23:19:24.362556  185407 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 23:19:24.364159  185407 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	I1101 23:19:24.365652  185407 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 23:19:24.367714  185407 config.go:180] Loaded profile config "kubernetes-upgrade-231829": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1101 23:19:24.368283  185407 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 23:19:24.402604  185407 docker.go:137] docker version: linux-20.10.21
	I1101 23:19:24.402703  185407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:19:24.529728  185407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:71 SystemTime:2022-11-01 23:19:24.426259062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:19:24.529878  185407 docker.go:254] overlay module found
	I1101 23:19:24.532647  185407 out.go:177] * Using the docker driver based on existing profile
	I1101 23:19:24.534183  185407 start.go:282] selected driver: docker
	I1101 23:19:24.534221  185407 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-231829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-231829 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:19:24.534343  185407 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 23:19:24.535461  185407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:19:24.659757  185407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:72 SystemTime:2022-11-01 23:19:24.561823334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:19:24.660086  185407 cni.go:95] Creating CNI manager for ""
	I1101 23:19:24.660111  185407 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:19:24.660132  185407 start_flags.go:317] config:
	{Name:kubernetes-upgrade-231829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-231829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:19:24.663838  185407 out.go:177] * Starting control plane node kubernetes-upgrade-231829 in cluster kubernetes-upgrade-231829
	I1101 23:19:24.665572  185407 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1101 23:19:24.667372  185407 out.go:177] * Pulling base image ...
	I1101 23:19:24.668964  185407 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1101 23:19:24.669008  185407 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1101 23:19:24.669017  185407 cache.go:57] Caching tarball of preloaded images
	I1101 23:19:24.669060  185407 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 23:19:24.669292  185407 preload.go:174] Found /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 23:19:24.669312  185407 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on containerd
	I1101 23:19:24.669460  185407 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/config.json ...
	I1101 23:19:24.704330  185407 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 23:19:24.704358  185407 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 23:19:24.704370  185407 cache.go:208] Successfully downloaded all kic artifacts
	I1101 23:19:24.704402  185407 start.go:364] acquiring machines lock for kubernetes-upgrade-231829: {Name:mk22f53ec5d5b3621daa2b9ea6c8ed32ff56597c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 23:19:24.704493  185407 start.go:368] acquired machines lock for "kubernetes-upgrade-231829" in 67.045µs
	I1101 23:19:24.704517  185407 start.go:96] Skipping create...Using existing machine configuration
	I1101 23:19:24.704524  185407 fix.go:55] fixHost starting: 
	I1101 23:19:24.704779  185407 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-231829 --format={{.State.Status}}
	I1101 23:19:24.733032  185407 fix.go:103] recreateIfNeeded on kubernetes-upgrade-231829: state=Stopped err=<nil>
	W1101 23:19:24.733078  185407 fix.go:129] unexpected machine state, will restart: <nil>
	I1101 23:19:24.735491  185407 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-231829" ...
	I1101 23:19:24.737185  185407 cli_runner.go:164] Run: docker start kubernetes-upgrade-231829
	I1101 23:19:25.416680  185407 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-231829 --format={{.State.Status}}
	I1101 23:19:25.449545  185407 kic.go:415] container "kubernetes-upgrade-231829" state is running.
	I1101 23:19:25.449983  185407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-231829
	I1101 23:19:25.480526  185407 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/config.json ...
	I1101 23:19:25.480765  185407 machine.go:88] provisioning docker machine ...
	I1101 23:19:25.480804  185407 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-231829"
	I1101 23:19:25.480859  185407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-231829
	I1101 23:19:25.519325  185407 main.go:134] libmachine: Using SSH client type: native
	I1101 23:19:25.520130  185407 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49358 <nil> <nil>}
	I1101 23:19:25.520158  185407 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-231829 && echo "kubernetes-upgrade-231829" | sudo tee /etc/hostname
	I1101 23:19:25.520769  185407 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43434->127.0.0.1:49358: read: connection reset by peer
	I1101 23:19:28.658182  185407 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-231829
	
	I1101 23:19:28.658270  185407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-231829
	I1101 23:19:28.685992  185407 main.go:134] libmachine: Using SSH client type: native
	I1101 23:19:28.686145  185407 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49358 <nil> <nil>}
	I1101 23:19:28.686165  185407 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-231829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-231829/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-231829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 23:19:28.811244  185407 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 23:19:28.811273  185407 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-6112/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-6112/.minikube}
	I1101 23:19:28.811302  185407 ubuntu.go:177] setting up certificates
	I1101 23:19:28.811311  185407 provision.go:83] configureAuth start
	I1101 23:19:28.811369  185407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-231829
	I1101 23:19:28.845601  185407 provision.go:138] copyHostCerts
	I1101 23:19:28.845662  185407 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem, removing ...
	I1101 23:19:28.845672  185407 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem
	I1101 23:19:28.845734  185407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem (1078 bytes)
	I1101 23:19:28.845842  185407 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem, removing ...
	I1101 23:19:28.845857  185407 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem
	I1101 23:19:28.845887  185407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem (1123 bytes)
	I1101 23:19:28.845951  185407 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem, removing ...
	I1101 23:19:28.845966  185407 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem
	I1101 23:19:28.845993  185407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem (1675 bytes)
	I1101 23:19:28.846051  185407 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-231829 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-231829]
	I1101 23:19:28.985070  185407 provision.go:172] copyRemoteCerts
	I1101 23:19:28.985129  185407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 23:19:28.985163  185407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-231829
	I1101 23:19:29.010198  185407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49358 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/kubernetes-upgrade-231829/id_rsa Username:docker}
	I1101 23:19:29.102390  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 23:19:29.121072  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 23:19:29.138844  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 23:19:29.155904  185407 provision.go:86] duration metric: configureAuth took 344.578259ms
	I1101 23:19:29.155930  185407 ubuntu.go:193] setting minikube options for container-runtime
	I1101 23:19:29.156124  185407 config.go:180] Loaded profile config "kubernetes-upgrade-231829": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:19:29.156138  185407 machine.go:91] provisioned docker machine in 3.675354065s
	I1101 23:19:29.156147  185407 start.go:300] post-start starting for "kubernetes-upgrade-231829" (driver="docker")
	I1101 23:19:29.156156  185407 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 23:19:29.156203  185407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 23:19:29.156240  185407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-231829
	I1101 23:19:29.185162  185407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49358 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/kubernetes-upgrade-231829/id_rsa Username:docker}
	I1101 23:19:29.272422  185407 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 23:19:29.275387  185407 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 23:19:29.275448  185407 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 23:19:29.275463  185407 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 23:19:29.275476  185407 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 23:19:29.275491  185407 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/addons for local assets ...
	I1101 23:19:29.275533  185407 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/files for local assets ...
	I1101 23:19:29.275592  185407 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem -> 128402.pem in /etc/ssl/certs
	I1101 23:19:29.275675  185407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 23:19:29.282045  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /etc/ssl/certs/128402.pem (1708 bytes)
	I1101 23:19:29.298543  185407 start.go:303] post-start completed in 142.381819ms
	I1101 23:19:29.298615  185407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 23:19:29.298644  185407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-231829
	I1101 23:19:29.325778  185407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49358 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/kubernetes-upgrade-231829/id_rsa Username:docker}
	I1101 23:19:29.407746  185407 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 23:19:29.411594  185407 fix.go:57] fixHost completed within 4.707064434s
	I1101 23:19:29.411621  185407 start.go:83] releasing machines lock for "kubernetes-upgrade-231829", held for 4.707117791s
	I1101 23:19:29.411692  185407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-231829
	I1101 23:19:29.444058  185407 ssh_runner.go:195] Run: systemctl --version
	I1101 23:19:29.444120  185407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-231829
	I1101 23:19:29.444135  185407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 23:19:29.444200  185407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-231829
	I1101 23:19:29.473159  185407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49358 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/kubernetes-upgrade-231829/id_rsa Username:docker}
	I1101 23:19:29.476710  185407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49358 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/kubernetes-upgrade-231829/id_rsa Username:docker}
	I1101 23:19:29.555464  185407 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 23:19:29.589795  185407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 23:19:29.598783  185407 docker.go:189] disabling docker service ...
	I1101 23:19:29.598827  185407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 23:19:29.607648  185407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 23:19:29.616874  185407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 23:19:29.703297  185407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 23:19:29.788949  185407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 23:19:29.797994  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 23:19:29.810560  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I1101 23:19:29.819490  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1101 23:19:29.828518  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1101 23:19:29.838090  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I1101 23:19:29.848601  185407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 23:19:29.856733  185407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 23:19:29.863582  185407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 23:19:29.950555  185407 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 23:19:30.018172  185407 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1101 23:19:30.018242  185407 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1101 23:19:30.022349  185407 start.go:472] Will wait 60s for crictl version
	I1101 23:19:30.022403  185407 ssh_runner.go:195] Run: sudo crictl version
	I1101 23:19:30.059669  185407 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-11-01T23:19:30Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1101 23:19:41.107032  185407 ssh_runner.go:195] Run: sudo crictl version
	I1101 23:19:41.132525  185407 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1101 23:19:41.132586  185407 ssh_runner.go:195] Run: containerd --version
	I1101 23:19:41.159816  185407 ssh_runner.go:195] Run: containerd --version
	I1101 23:19:41.192866  185407 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	I1101 23:19:41.194646  185407 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-231829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 23:19:41.217549  185407 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 23:19:41.220937  185407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 23:19:41.232402  185407 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1101 23:19:41.234008  185407 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1101 23:19:41.234081  185407 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 23:19:41.257705  185407 containerd.go:549] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.25.3". assuming images are not preloaded.
	I1101 23:19:41.257762  185407 ssh_runner.go:195] Run: which lz4
	I1101 23:19:41.261107  185407 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 23:19:41.264103  185407 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I1101 23:19:41.264139  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (669534256 bytes)
	I1101 23:19:42.534125  185407 containerd.go:496] Took 1.273050 seconds to copy over tarball
	I1101 23:19:42.534217  185407 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 23:19:45.100599  185407 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.566350665s)
	I1101 23:19:45.100640  185407 containerd.go:503] Took 2.566463 seconds to extract the tarball
	I1101 23:19:45.100651  185407 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 23:19:46.475284  185407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 23:19:46.554148  185407 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 23:19:46.672878  185407 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 23:19:46.701862  185407 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.25.3 registry.k8s.io/kube-controller-manager:v1.25.3 registry.k8s.io/kube-scheduler:v1.25.3 registry.k8s.io/kube-proxy:v1.25.3 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 23:19:46.701943  185407 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:19:46.701953  185407 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.25.3
	I1101 23:19:46.701973  185407 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.25.3
	I1101 23:19:46.701982  185407 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.25.3
	I1101 23:19:46.702014  185407 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.25.3
	I1101 23:19:46.702064  185407 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I1101 23:19:46.701952  185407 image.go:134] retrieving image: registry.k8s.io/pause:3.8
	I1101 23:19:46.702193  185407 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.4-0
	I1101 23:19:46.703256  185407 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I1101 23:19:46.703266  185407 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:19:46.703267  185407 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.25.3: Error: No such image: registry.k8s.io/kube-proxy:v1.25.3
	I1101 23:19:46.703366  185407 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.25.3: Error: No such image: registry.k8s.io/kube-controller-manager:v1.25.3
	I1101 23:19:46.703458  185407 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.25.3: Error: No such image: registry.k8s.io/kube-apiserver:v1.25.3
	I1101 23:19:46.703359  185407 image.go:177] daemon lookup for registry.k8s.io/pause:3.8: Error: No such image: registry.k8s.io/pause:3.8
	I1101 23:19:46.703567  185407 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.25.3: Error: No such image: registry.k8s.io/kube-scheduler:v1.25.3
	I1101 23:19:46.703743  185407 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.4-0: Error: No such image: registry.k8s.io/etcd:3.5.4-0
	I1101 23:19:46.865582  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.25.3"
	I1101 23:19:46.869518  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.25.3"
	I1101 23:19:46.874778  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.4-0"
	I1101 23:19:46.877832  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.8"
	I1101 23:19:46.878470  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.9.3"
	I1101 23:19:46.900162  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.25.3"
	I1101 23:19:46.922643  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.25.3"
	I1101 23:19:47.478181  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
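The eight `ctr images check` runs above are minikube's cache probe: it lists containerd's `k8s.io` image namespace and greps for the exact reference, and a miss is what produces the "needs transfer" lines that follow. A minimal sketch of that probe (the image reference is just one example from this log; run it on the minikube node):

```shell
# Probe containerd's k8s.io namespace for an image, the way the log's
# `sudo ctr -n=k8s.io images check | grep <ref>` commands do.
IMG="registry.k8s.io/pause:3.8"   # example reference from the log
if sudo ctr -n=k8s.io images check 2>/dev/null | grep -q "$IMG"; then
  echo "present in containerd"
else
  echo "needs transfer"
fi
```

A plain `grep -q` on the full reference is enough here because `ctr images check` prints one line per image, starting with the reference.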
	I1101 23:19:47.614572  185407 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.25.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.25.3" does not exist at hash "60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a" in container runtime
	I1101 23:19:47.614660  185407 cri.go:216] Removing image: registry.k8s.io/kube-controller-manager:v1.25.3
	I1101 23:19:47.614722  185407 ssh_runner.go:195] Run: which crictl
	I1101 23:19:47.632230  185407 cache_images.go:116] "registry.k8s.io/etcd:3.5.4-0" needs transfer: "registry.k8s.io/etcd:3.5.4-0" does not exist at hash "a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66" in container runtime
	I1101 23:19:47.632280  185407 cri.go:216] Removing image: registry.k8s.io/etcd:3.5.4-0
	I1101 23:19:47.632318  185407 ssh_runner.go:195] Run: which crictl
	I1101 23:19:47.632401  185407 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.25.3" needs transfer: "registry.k8s.io/kube-proxy:v1.25.3" does not exist at hash "beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041" in container runtime
	I1101 23:19:47.632429  185407 cri.go:216] Removing image: registry.k8s.io/kube-proxy:v1.25.3
	I1101 23:19:47.632492  185407 ssh_runner.go:195] Run: which crictl
	I1101 23:19:47.632613  185407 cache_images.go:116] "registry.k8s.io/pause:3.8" needs transfer: "registry.k8s.io/pause:3.8" does not exist at hash "4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517" in container runtime
	I1101 23:19:47.632640  185407 cri.go:216] Removing image: registry.k8s.io/pause:3.8
	I1101 23:19:47.632683  185407 ssh_runner.go:195] Run: which crictl
	I1101 23:19:47.632798  185407 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
	I1101 23:19:47.632823  185407 cri.go:216] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
	I1101 23:19:47.632853  185407 ssh_runner.go:195] Run: which crictl
	I1101 23:19:47.656932  185407 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.25.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.25.3" does not exist at hash "0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0" in container runtime
	I1101 23:19:47.656994  185407 cri.go:216] Removing image: registry.k8s.io/kube-apiserver:v1.25.3
	I1101 23:19:47.657035  185407 ssh_runner.go:195] Run: which crictl
	I1101 23:19:47.724074  185407 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.25.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.25.3" does not exist at hash "6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912" in container runtime
	I1101 23:19:47.724125  185407 cri.go:216] Removing image: registry.k8s.io/kube-scheduler:v1.25.3
	I1101 23:19:47.724182  185407 ssh_runner.go:195] Run: which crictl
	I1101 23:19:47.842943  185407 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 23:19:47.842995  185407 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:19:47.843023  185407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.3
	I1101 23:19:47.843029  185407 ssh_runner.go:195] Run: which crictl
	I1101 23:19:47.843087  185407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8
	I1101 23:19:47.843165  185407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0
	I1101 23:19:47.843221  185407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.3
	I1101 23:19:47.843248  185407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3
	I1101 23:19:47.843327  185407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.3
	I1101 23:19:47.843391  185407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.3
	I1101 23:19:49.677990  185407 ssh_runner.go:235] Completed: which crictl: (1.834897885s)
	I1101 23:19:49.678043  185407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:19:49.677985  185407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.3: (1.834924088s)
	I1101 23:19:49.678136  185407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3
	I1101 23:19:49.678221  185407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I1101 23:19:49.978949  185407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8: (2.135820235s)
	I1101 23:19:49.978980  185407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8
	I1101 23:19:49.979056  185407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.8
	I1101 23:19:49.986249  185407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3: (2.142972121s)
	I1101 23:19:49.986279  185407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I1101 23:19:49.986286  185407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.3: (2.143032991s)
	I1101 23:19:49.986306  185407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3
	I1101 23:19:49.986320  185407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0: (2.14312795s)
	I1101 23:19:49.986338  185407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0
	I1101 23:19:49.986344  185407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.3: (2.142994751s)
	I1101 23:19:49.986354  185407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3
	I1101 23:19:49.986356  185407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3
	I1101 23:19:49.986376  185407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3
	I1101 23:19:49.986399  185407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0
	I1101 23:19:49.986413  185407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3
	I1101 23:19:49.986419  185407 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.3: (2.142978708s)
	I1101 23:19:49.986430  185407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3
	I1101 23:19:49.986449  185407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 23:19:49.986485  185407 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.25.3': No such file or directory
	I1101 23:19:49.986494  185407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3
	I1101 23:19:49.986499  185407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1101 23:19:49.986502  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 --> /var/lib/minikube/images/kube-controller-manager_v1.25.3 (31264768 bytes)
	I1101 23:19:49.986548  185407 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.8: stat -c "%s %y" /var/lib/minikube/images/pause_3.8: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.8': No such file or directory
	I1101 23:19:49.986567  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 --> /var/lib/minikube/images/pause_3.8 (311296 bytes)
	I1101 23:19:49.993438  185407 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I1101 23:19:49.993486  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
	I1101 23:19:49.995800  185407 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.25.3': No such file or directory
	I1101 23:19:49.995825  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 --> /var/lib/minikube/images/kube-scheduler_v1.25.3 (15801856 bytes)
	I1101 23:19:49.995842  185407 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1101 23:19:49.995865  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1101 23:19:49.995902  185407 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.4-0': No such file or directory
	I1101 23:19:49.995915  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 --> /var/lib/minikube/images/etcd_3.5.4-0 (102160384 bytes)
	I1101 23:19:49.995925  185407 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.25.3': No such file or directory
	I1101 23:19:49.995936  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 --> /var/lib/minikube/images/kube-apiserver_v1.25.3 (34241024 bytes)
	I1101 23:19:49.997251  185407 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.25.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.25.3': No such file or directory
	I1101 23:19:49.997275  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 --> /var/lib/minikube/images/kube-proxy_v1.25.3 (20268032 bytes)
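Each "existence check ... Process exited with status 1" pair above is the same pattern: `stat -c "%s %y"` exits non-zero when the cached tarball is absent on the node, and that non-zero exit is what triggers the `scp` from the host's image cache. A sketch of the decision (path is an example taken from this log):

```shell
# Cache-transfer decision as seen in the log: stat the tarball on the
# node; a missing file means "transfer from the host cache via scp".
f=/var/lib/minikube/images/pause_3.8   # example path from the log
if stat -c "%s %y" "$f" >/dev/null 2>&1; then
  echo "already on node, skip transfer"
else
  echo "missing, scp from cache"
fi
```

The `%s %y` format (size and mtime) is also what minikube compares against the cached copy when the file does exist.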
	W1101 23:19:50.007361  185407 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1101 23:19:50.007459  185407 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	I1101 23:19:50.024075  185407 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.8
	I1101 23:19:50.024154  185407 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8
	I1101 23:19:50.024238  185407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-231829
	I1101 23:19:50.058252  185407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49358 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/kubernetes-upgrade-231829/id_rsa Username:docker}
	I1101 23:19:50.420353  185407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 from cache
	I1101 23:19:50.420430  185407 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 23:19:50.420506  185407 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1101 23:19:51.553972  185407 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.133434189s)
	I1101 23:19:51.554015  185407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 23:19:51.554036  185407 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.9.3
	I1101 23:19:51.554117  185407 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.9.3
	I1101 23:19:52.235019  185407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
	I1101 23:19:52.235060  185407 containerd.go:233] Loading image: /var/lib/minikube/images/kube-scheduler_v1.25.3
	I1101 23:19:52.235102  185407 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.25.3
	I1101 23:19:53.173235  185407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.3 from cache
	I1101 23:19:53.173275  185407 containerd.go:233] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I1101 23:19:53.173317  185407 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3
	I1101 23:19:54.495497  185407 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.3: (1.32214575s)
	I1101 23:19:54.495533  185407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.3 from cache
	I1101 23:19:54.495563  185407 containerd.go:233] Loading image: /var/lib/minikube/images/kube-apiserver_v1.25.3
	I1101 23:19:54.495617  185407 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3
	I1101 23:19:56.008178  185407 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.3: (1.512534825s)
	I1101 23:19:56.008199  185407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.3 from cache
	I1101 23:19:56.008215  185407 containerd.go:233] Loading image: /var/lib/minikube/images/kube-proxy_v1.25.3
	I1101 23:19:56.008248  185407 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.25.3
	I1101 23:19:56.682522  185407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.3 from cache
	I1101 23:19:56.682558  185407 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.4-0
	I1101 23:19:56.682600  185407 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0
	I1101 23:20:01.617509  185407 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0: (4.934867706s)
	I1101 23:20:01.617546  185407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 from cache
	I1101 23:20:01.617580  185407 cache_images.go:123] Successfully loaded all cached images
	I1101 23:20:01.617591  185407 cache_images.go:92] LoadImages completed in 14.915695678s
	I1101 23:20:01.617643  185407 ssh_runner.go:195] Run: sudo crictl info
	I1101 23:20:01.683029  185407 cni.go:95] Creating CNI manager for ""
	I1101 23:20:01.683064  185407 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:20:01.683081  185407 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 23:20:01.683099  185407 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-231829 NodeName:kubernetes-upgrade-231829 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 23:20:01.683261  185407 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-231829"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 23:20:01.683371  185407 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-231829 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-231829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 23:20:01.683461  185407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1101 23:20:01.693400  185407 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 23:20:01.693475  185407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 23:20:01.701556  185407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (549 bytes)
	I1101 23:20:01.716006  185407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 23:20:01.731815  185407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I1101 23:20:01.745910  185407 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 23:20:01.749236  185407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
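The `/etc/hosts` one-liner above is an idempotent rewrite: filter out any stale `control-plane.minikube.internal` line, append the current mapping, and copy the temp file back with sudo. Spelled out (same command shape as the log, on the node):

```shell
# Idempotent hosts entry, mirroring the command in the log: drop any
# stale line for the name, append the fresh IP mapping, then install
# the result over /etc/hosts with sudo.
ip=192.168.76.2
name=control-plane.minikube.internal
hosts=/etc/hosts
{ grep -v "$name\$" "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
sudo cp /tmp/h.$$ "$hosts"
```

Writing to a temp file first means the `grep -v` read and the overwrite of `/etc/hosts` never race each other.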
	I1101 23:20:01.782797  185407 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829 for IP: 192.168.76.2
	I1101 23:20:01.782917  185407 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key
	I1101 23:20:01.782963  185407 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key
	I1101 23:20:01.783047  185407 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/client.key
	I1101 23:20:01.783130  185407 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/apiserver.key.31bdca25
	I1101 23:20:01.783183  185407 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/proxy-client.key
	I1101 23:20:01.783307  185407 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem (1338 bytes)
	W1101 23:20:01.783342  185407 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840_empty.pem, impossibly tiny 0 bytes
	I1101 23:20:01.783365  185407 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 23:20:01.783427  185407 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem (1078 bytes)
	I1101 23:20:01.783468  185407 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem (1123 bytes)
	I1101 23:20:01.783500  185407 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem (1675 bytes)
	I1101 23:20:01.783554  185407 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem (1708 bytes)
	I1101 23:20:01.784140  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 23:20:01.807730  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 23:20:01.826955  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 23:20:01.846425  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 23:20:01.872108  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 23:20:01.905322  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 23:20:01.926386  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 23:20:01.944493  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 23:20:01.962241  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 23:20:01.982020  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem --> /usr/share/ca-certificates/12840.pem (1338 bytes)
	I1101 23:20:02.011703  185407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /usr/share/ca-certificates/128402.pem (1708 bytes)
	I1101 23:20:02.030763  185407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 23:20:02.044585  185407 ssh_runner.go:195] Run: openssl version
	I1101 23:20:02.049561  185407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 23:20:02.081541  185407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:20:02.085176  185407 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:20:02.085224  185407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:20:02.089931  185407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 23:20:02.104082  185407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12840.pem && ln -fs /usr/share/ca-certificates/12840.pem /etc/ssl/certs/12840.pem"
	I1101 23:20:02.111602  185407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12840.pem
	I1101 23:20:02.114655  185407 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:50 /usr/share/ca-certificates/12840.pem
	I1101 23:20:02.114703  185407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12840.pem
	I1101 23:20:02.120246  185407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12840.pem /etc/ssl/certs/51391683.0"
	I1101 23:20:02.127742  185407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128402.pem && ln -fs /usr/share/ca-certificates/128402.pem /etc/ssl/certs/128402.pem"
	I1101 23:20:02.135611  185407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128402.pem
	I1101 23:20:02.139929  185407 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:50 /usr/share/ca-certificates/128402.pem
	I1101 23:20:02.139989  185407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128402.pem
	I1101 23:20:02.146764  185407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128402.pem /etc/ssl/certs/3ec20f2e.0"
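The `/etc/ssl/certs/<hash>.0` symlink names used above (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) are not arbitrary: they are the OpenSSL subject-name hashes that the c_rehash-style trust-store lookup expects, produced by the `openssl x509 -hash` calls in the log. A sketch of one such installation step (path taken from the log; run on the node):

```shell
# Derive the subject-name hash for a CA cert and create the <hash>.0
# symlink OpenSSL's certificate-directory lookup resolves.
pem=/usr/share/ca-certificates/minikubeCA.pem   # path from the log
h=$(openssl x509 -hash -noout -in "$pem")
sudo /bin/bash -c "test -L /etc/ssl/certs/$h.0 || ln -fs $pem /etc/ssl/certs/$h.0"
```

The `test -L || ln -fs` guard makes the step idempotent across repeated cluster starts, matching the commands in the log.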
	I1101 23:20:02.157822  185407 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-231829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-231829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:20:02.157933  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1101 23:20:02.157968  185407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 23:20:02.187754  185407 cri.go:87] found id: ""
	I1101 23:20:02.187816  185407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 23:20:02.195203  185407 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1101 23:20:02.195240  185407 kubeadm.go:627] restartCluster start
	I1101 23:20:02.195279  185407 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 23:20:02.202884  185407 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 23:20:02.203505  185407 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-231829" does not appear in /home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 23:20:02.203770  185407 kubeconfig.go:146] "kubernetes-upgrade-231829" context is missing from /home/jenkins/minikube-integration/15232-6112/kubeconfig - will repair!
	I1101 23:20:02.204207  185407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/kubeconfig: {Name:mk05c0f2e138ac359064389ca5eb4fadba1c406f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:20:02.204958  185407 kapi.go:59] client config for kubernetes-upgrade-231829: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kubernetes-upgrade-231829/client.key", CAFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786820), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 23:20:02.205577  185407 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 23:20:02.212947  185407 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-11-01 23:18:40.203544628 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-11-01 23:20:01.741908065 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.76.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-231829
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.25.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1101 23:20:02.212968  185407 kubeadm.go:1114] stopping kube-system containers ...
	I1101 23:20:02.212980  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1101 23:20:02.213024  185407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 23:20:02.242794  185407 cri.go:87] found id: ""
	I1101 23:20:02.242864  185407 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 23:20:02.253347  185407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:20:02.260556  185407 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5707 Nov  1 23:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5739 Nov  1 23:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Nov  1 23:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Nov  1 23:18 /etc/kubernetes/scheduler.conf
	
	I1101 23:20:02.260612  185407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 23:20:02.267482  185407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 23:20:02.275477  185407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 23:20:02.282541  185407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 23:20:02.289637  185407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 23:20:02.297317  185407 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1101 23:20:02.297340  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:20:02.349999  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:20:03.323033  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:20:03.481506  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 23:20:03.539946  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
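	The restart path above replays five `kubeadm init phase` steps in a fixed order (certs, kubeconfig, kubelet-start, control-plane, etcd). A minimal shell sketch of that sequence, which only echoes the commands instead of executing them; the binary path and version are copied from the log, the loop itself is an illustration rather than minikube's actual code:

```shell
#!/bin/sh
# Echo the five kubeadm restart phases in the order the log runs them.
n=0
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
  # $phase is intentionally unquoted so "certs all" splits into two args.
  echo sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" \
    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
  n=$((n+1))
done
echo "phases: $n"
```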
	I1101 23:20:03.584876  185407 api_server.go:51] waiting for apiserver process to appear ...
	I1101 23:20:03.584949  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe repeated at ~500ms intervals from 23:20:04 through 23:21:02, with no apiserver process appearing ...]
	I1101 23:21:03.094669  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
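	The long run above is minikube's apiserver wait loop: the same `pgrep` probe fired roughly every 500ms until a deadline, after which it falls through to log gathering. A self-contained sketch of that polling pattern, using a background `sleep` as a stand-in for the awaited process (an assumption for illustration; this is not minikube's implementation):

```shell
#!/bin/sh
# Poll for a target process every 500ms, up to a bounded number of tries.
sleep 5 &            # stand-in for the process we are waiting for
target=$!
tries=0
status=1
while [ "$tries" -lt 6 ]; do
  if pgrep -x sleep >/dev/null 2>&1; then
    status=0         # process found: stop polling
    break
  fi
  tries=$((tries+1))
  sleep 0.5          # the ~500ms retry interval seen in the log
done
kill "$target" 2>/dev/null
echo "wait status: $status"
```

	In the log the probe never succeeds, so the loop runs its full minute before minikube moves on to collecting diagnostics.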
	I1101 23:21:03.594858  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:21:03.594938  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:21:03.626954  185407 cri.go:87] found id: ""
	I1101 23:21:03.626981  185407 logs.go:274] 0 containers: []
	W1101 23:21:03.627006  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:21:03.627013  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:21:03.627063  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:21:03.659192  185407 cri.go:87] found id: ""
	I1101 23:21:03.659219  185407 logs.go:274] 0 containers: []
	W1101 23:21:03.659227  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:21:03.659235  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:21:03.659291  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:21:03.686542  185407 cri.go:87] found id: ""
	I1101 23:21:03.686568  185407 logs.go:274] 0 containers: []
	W1101 23:21:03.686575  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:21:03.686581  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:21:03.686633  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:21:03.715042  185407 cri.go:87] found id: ""
	I1101 23:21:03.715065  185407 logs.go:274] 0 containers: []
	W1101 23:21:03.715072  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:21:03.715079  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:21:03.715149  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:21:03.743939  185407 cri.go:87] found id: ""
	I1101 23:21:03.743961  185407 logs.go:274] 0 containers: []
	W1101 23:21:03.743968  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:21:03.743974  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:21:03.744033  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:21:03.768973  185407 cri.go:87] found id: ""
	I1101 23:21:03.769023  185407 logs.go:274] 0 containers: []
	W1101 23:21:03.769032  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:21:03.769040  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:21:03.769124  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:21:03.791965  185407 cri.go:87] found id: ""
	I1101 23:21:03.791993  185407 logs.go:274] 0 containers: []
	W1101 23:21:03.792013  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:21:03.792021  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:21:03.792080  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:21:03.817345  185407 cri.go:87] found id: ""
	I1101 23:21:03.817373  185407 logs.go:274] 0 containers: []
	W1101 23:21:03.817383  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:21:03.817400  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:21:03.817416  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:21:03.833951  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:21:03.833984  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:21:03.892312  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:21:03.892336  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:21:03.892349  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:21:03.928340  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:21:03.928384  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:21:03.957518  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:21:03.957549  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:21:03.974717  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:14 kubernetes-upgrade-231829 kubelet[1388]: E1101 23:20:14.161209    1388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	[... the same "unknown flag: --cni-conf-dir" kubelet failure repeated on every restart (kubelet PIDs 1400 through 1913) from 23:20:14 through 23:20:42 ...]
	W1101 23:21:03.988878  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:43 kubernetes-upgrade-231829 kubelet[1928]: E1101 23:20:43.393249    1928 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.989239  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:44 kubernetes-upgrade-231829 kubelet[1941]: E1101 23:20:44.138138    1941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.989592  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:44 kubernetes-upgrade-231829 kubelet[1956]: E1101 23:20:44.891555    1956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.989944  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:45 kubernetes-upgrade-231829 kubelet[1969]: E1101 23:20:45.639072    1969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.990291  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:46 kubernetes-upgrade-231829 kubelet[1984]: E1101 23:20:46.394166    1984 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.990632  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:47 kubernetes-upgrade-231829 kubelet[1997]: E1101 23:20:47.139952    1997 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.990978  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:47 kubernetes-upgrade-231829 kubelet[2012]: E1101 23:20:47.893725    2012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.991320  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:48 kubernetes-upgrade-231829 kubelet[2026]: E1101 23:20:48.640523    2026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.991754  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:49 kubernetes-upgrade-231829 kubelet[2042]: E1101 23:20:49.394841    2042 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.992099  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:50 kubernetes-upgrade-231829 kubelet[2056]: E1101 23:20:50.152975    2056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.992440  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:50 kubernetes-upgrade-231829 kubelet[2071]: E1101 23:20:50.903122    2071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.992803  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:51 kubernetes-upgrade-231829 kubelet[2084]: E1101 23:20:51.651761    2084 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.993149  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:52 kubernetes-upgrade-231829 kubelet[2100]: E1101 23:20:52.399392    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.993490  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:53 kubernetes-upgrade-231829 kubelet[2112]: E1101 23:20:53.152135    2112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.993831  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:53 kubernetes-upgrade-231829 kubelet[2127]: E1101 23:20:53.889839    2127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.994179  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:54 kubernetes-upgrade-231829 kubelet[2141]: E1101 23:20:54.640073    2141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.994518  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:55 kubernetes-upgrade-231829 kubelet[2157]: E1101 23:20:55.398070    2157 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.994871  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:56 kubernetes-upgrade-231829 kubelet[2170]: E1101 23:20:56.148544    2170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.995278  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:56 kubernetes-upgrade-231829 kubelet[2184]: E1101 23:20:56.887440    2184 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.995656  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:57 kubernetes-upgrade-231829 kubelet[2198]: E1101 23:20:57.641590    2198 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.996005  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:58 kubernetes-upgrade-231829 kubelet[2215]: E1101 23:20:58.388944    2215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.996379  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:59 kubernetes-upgrade-231829 kubelet[2228]: E1101 23:20:59.143388    2228 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.996890  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:59 kubernetes-upgrade-231829 kubelet[2243]: E1101 23:20:59.890904    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.997283  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:00 kubernetes-upgrade-231829 kubelet[2256]: E1101 23:21:00.636636    2256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.997726  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:01 kubernetes-upgrade-231829 kubelet[2270]: E1101 23:21:01.388089    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.998241  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2283]: E1101 23:21:02.144525    2283 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.998827  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2298]: E1101 23:21:02.892813    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.999267  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:03 kubernetes-upgrade-231829 kubelet[2312]: E1101 23:21:03.652890    2312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:03.999410  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:03.999451  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:21:03.999567  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:21:03.999579  185407 out.go:239]   Nov 01 23:21:00 kubernetes-upgrade-231829 kubelet[2256]: E1101 23:21:00.636636    2256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.999589  185407 out.go:239]   Nov 01 23:21:01 kubernetes-upgrade-231829 kubelet[2270]: E1101 23:21:01.388089    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.999606  185407 out.go:239]   Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2283]: E1101 23:21:02.144525    2283 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.999614  185407 out.go:239]   Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2298]: E1101 23:21:02.892813    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:03.999622  185407 out.go:239]   Nov 01 23:21:03 kubernetes-upgrade-231829 kubelet[2312]: E1101 23:21:03.652890    2312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:03.999632  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:03.999639  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:21:14.000996  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:21:14.095118  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:21:14.095181  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:21:14.122689  185407 cri.go:87] found id: ""
	I1101 23:21:14.122747  185407 logs.go:274] 0 containers: []
	W1101 23:21:14.122757  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:21:14.122765  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:21:14.122817  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:21:14.149647  185407 cri.go:87] found id: ""
	I1101 23:21:14.149670  185407 logs.go:274] 0 containers: []
	W1101 23:21:14.149679  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:21:14.149686  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:21:14.149744  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:21:14.175790  185407 cri.go:87] found id: ""
	I1101 23:21:14.175819  185407 logs.go:274] 0 containers: []
	W1101 23:21:14.175831  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:21:14.175839  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:21:14.175890  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:21:14.200788  185407 cri.go:87] found id: ""
	I1101 23:21:14.200811  185407 logs.go:274] 0 containers: []
	W1101 23:21:14.200817  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:21:14.200825  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:21:14.200878  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:21:14.224179  185407 cri.go:87] found id: ""
	I1101 23:21:14.224213  185407 logs.go:274] 0 containers: []
	W1101 23:21:14.224222  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:21:14.224231  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:21:14.224275  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:21:14.248897  185407 cri.go:87] found id: ""
	I1101 23:21:14.248924  185407 logs.go:274] 0 containers: []
	W1101 23:21:14.248932  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:21:14.248940  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:21:14.248996  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:21:14.273224  185407 cri.go:87] found id: ""
	I1101 23:21:14.273247  185407 logs.go:274] 0 containers: []
	W1101 23:21:14.273253  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:21:14.273260  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:21:14.273307  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:21:14.296116  185407 cri.go:87] found id: ""
	I1101 23:21:14.296139  185407 logs.go:274] 0 containers: []
	W1101 23:21:14.296145  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:21:14.296153  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:21:14.296164  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:21:14.313736  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:24 kubernetes-upgrade-231829 kubelet[1577]: E1101 23:20:24.651106    1577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.314399  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:25 kubernetes-upgrade-231829 kubelet[1591]: E1101 23:20:25.397915    1591 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.314802  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:26 kubernetes-upgrade-231829 kubelet[1603]: E1101 23:20:26.154278    1603 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.315165  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:26 kubernetes-upgrade-231829 kubelet[1618]: E1101 23:20:26.913264    1618 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.315558  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:27 kubernetes-upgrade-231829 kubelet[1631]: E1101 23:20:27.649645    1631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.315924  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:28 kubernetes-upgrade-231829 kubelet[1645]: E1101 23:20:28.398833    1645 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.316285  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:29 kubernetes-upgrade-231829 kubelet[1658]: E1101 23:20:29.147795    1658 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.316650  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:29 kubernetes-upgrade-231829 kubelet[1674]: E1101 23:20:29.897167    1674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.317009  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:30 kubernetes-upgrade-231829 kubelet[1687]: E1101 23:20:30.664295    1687 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.317372  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:31 kubernetes-upgrade-231829 kubelet[1702]: E1101 23:20:31.400357    1702 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.317733  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:32 kubernetes-upgrade-231829 kubelet[1715]: E1101 23:20:32.140095    1715 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.318091  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:32 kubernetes-upgrade-231829 kubelet[1730]: E1101 23:20:32.891799    1730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.318448  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:33 kubernetes-upgrade-231829 kubelet[1743]: E1101 23:20:33.649715    1743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.318814  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:34 kubernetes-upgrade-231829 kubelet[1757]: E1101 23:20:34.397954    1757 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.319175  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:35 kubernetes-upgrade-231829 kubelet[1771]: E1101 23:20:35.157946    1771 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.319555  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:35 kubernetes-upgrade-231829 kubelet[1784]: E1101 23:20:35.895737    1784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.319917  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:36 kubernetes-upgrade-231829 kubelet[1798]: E1101 23:20:36.641537    1798 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.320282  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:37 kubernetes-upgrade-231829 kubelet[1813]: E1101 23:20:37.394479    1813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.320645  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:38 kubernetes-upgrade-231829 kubelet[1826]: E1101 23:20:38.141279    1826 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.320992  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:38 kubernetes-upgrade-231829 kubelet[1842]: E1101 23:20:38.903929    1842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.321336  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:39 kubernetes-upgrade-231829 kubelet[1855]: E1101 23:20:39.648534    1855 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.321683  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:40 kubernetes-upgrade-231829 kubelet[1871]: E1101 23:20:40.391059    1871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.322050  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:41 kubernetes-upgrade-231829 kubelet[1884]: E1101 23:20:41.140631    1884 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.322421  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:41 kubernetes-upgrade-231829 kubelet[1900]: E1101 23:20:41.892418    1900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.322796  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:42 kubernetes-upgrade-231829 kubelet[1913]: E1101 23:20:42.637626    1913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.323168  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:43 kubernetes-upgrade-231829 kubelet[1928]: E1101 23:20:43.393249    1928 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.323611  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:44 kubernetes-upgrade-231829 kubelet[1941]: E1101 23:20:44.138138    1941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.324018  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:44 kubernetes-upgrade-231829 kubelet[1956]: E1101 23:20:44.891555    1956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.324388  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:45 kubernetes-upgrade-231829 kubelet[1969]: E1101 23:20:45.639072    1969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.324760  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:46 kubernetes-upgrade-231829 kubelet[1984]: E1101 23:20:46.394166    1984 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.325163  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:47 kubernetes-upgrade-231829 kubelet[1997]: E1101 23:20:47.139952    1997 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.325549  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:47 kubernetes-upgrade-231829 kubelet[2012]: E1101 23:20:47.893725    2012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.325908  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:48 kubernetes-upgrade-231829 kubelet[2026]: E1101 23:20:48.640523    2026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.326274  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:49 kubernetes-upgrade-231829 kubelet[2042]: E1101 23:20:49.394841    2042 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.326631  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:50 kubernetes-upgrade-231829 kubelet[2056]: E1101 23:20:50.152975    2056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.327009  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:50 kubernetes-upgrade-231829 kubelet[2071]: E1101 23:20:50.903122    2071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.327383  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:51 kubernetes-upgrade-231829 kubelet[2084]: E1101 23:20:51.651761    2084 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.327758  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:52 kubernetes-upgrade-231829 kubelet[2100]: E1101 23:20:52.399392    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.328118  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:53 kubernetes-upgrade-231829 kubelet[2112]: E1101 23:20:53.152135    2112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.328475  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:53 kubernetes-upgrade-231829 kubelet[2127]: E1101 23:20:53.889839    2127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.328835  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:54 kubernetes-upgrade-231829 kubelet[2141]: E1101 23:20:54.640073    2141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.329188  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:55 kubernetes-upgrade-231829 kubelet[2157]: E1101 23:20:55.398070    2157 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.329557  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:56 kubernetes-upgrade-231829 kubelet[2170]: E1101 23:20:56.148544    2170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.329913  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:56 kubernetes-upgrade-231829 kubelet[2184]: E1101 23:20:56.887440    2184 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.330288  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:57 kubernetes-upgrade-231829 kubelet[2198]: E1101 23:20:57.641590    2198 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.330659  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:58 kubernetes-upgrade-231829 kubelet[2215]: E1101 23:20:58.388944    2215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.331015  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:59 kubernetes-upgrade-231829 kubelet[2228]: E1101 23:20:59.143388    2228 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.331379  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:59 kubernetes-upgrade-231829 kubelet[2243]: E1101 23:20:59.890904    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.331754  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:00 kubernetes-upgrade-231829 kubelet[2256]: E1101 23:21:00.636636    2256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.332123  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:01 kubernetes-upgrade-231829 kubelet[2270]: E1101 23:21:01.388089    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.332480  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2283]: E1101 23:21:02.144525    2283 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.332856  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2298]: E1101 23:21:02.892813    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.333217  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:03 kubernetes-upgrade-231829 kubelet[2312]: E1101 23:21:03.652890    2312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.333578  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:04 kubernetes-upgrade-231829 kubelet[2460]: E1101 23:21:04.393168    2460 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.333948  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:05 kubernetes-upgrade-231829 kubelet[2471]: E1101 23:21:05.140677    2471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.334325  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:05 kubernetes-upgrade-231829 kubelet[2482]: E1101 23:21:05.889642    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.334695  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:06 kubernetes-upgrade-231829 kubelet[2493]: E1101 23:21:06.639917    2493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.335045  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:07 kubernetes-upgrade-231829 kubelet[2504]: E1101 23:21:07.388854    2504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.335557  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:08 kubernetes-upgrade-231829 kubelet[2515]: E1101 23:21:08.139674    2515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.335946  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:08 kubernetes-upgrade-231829 kubelet[2527]: E1101 23:21:08.890019    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.336332  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:09 kubernetes-upgrade-231829 kubelet[2538]: E1101 23:21:09.642321    2538 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.336695  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:10 kubernetes-upgrade-231829 kubelet[2549]: E1101 23:21:10.391251    2549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.337096  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2560]: E1101 23:21:11.140830    2560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.337457  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2571]: E1101 23:21:11.897457    2571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.337818  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:12 kubernetes-upgrade-231829 kubelet[2582]: E1101 23:21:12.639246    2582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.338172  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:13 kubernetes-upgrade-231829 kubelet[2593]: E1101 23:21:13.389854    2593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.338532  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2606]: E1101 23:21:14.146256    2606 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:14.338660  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:21:14.338678  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:21:14.353158  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:21:14.353184  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:21:14.408396  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:21:14.408420  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:21:14.408429  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:21:14.443305  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:21:14.443340  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:21:14.470161  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:14.470187  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:21:14.470289  185407 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1101 23:21:14.470301  185407 out.go:239]   Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2560]: E1101 23:21:11.140830    2560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2560]: E1101 23:21:11.140830    2560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.470306  185407 out.go:239]   Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2571]: E1101 23:21:11.897457    2571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2571]: E1101 23:21:11.897457    2571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.470311  185407 out.go:239]   Nov 01 23:21:12 kubernetes-upgrade-231829 kubelet[2582]: E1101 23:21:12.639246    2582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:12 kubernetes-upgrade-231829 kubelet[2582]: E1101 23:21:12.639246    2582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.470316  185407 out.go:239]   Nov 01 23:21:13 kubernetes-upgrade-231829 kubelet[2593]: E1101 23:21:13.389854    2593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:13 kubernetes-upgrade-231829 kubelet[2593]: E1101 23:21:13.389854    2593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:14.470327  185407 out.go:239]   Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2606]: E1101 23:21:14.146256    2606 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2606]: E1101 23:21:14.146256    2606 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:14.470331  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:14.470336  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:21:24.472019  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:21:24.594972  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:21:24.595038  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:21:24.619539  185407 cri.go:87] found id: ""
	I1101 23:21:24.619564  185407 logs.go:274] 0 containers: []
	W1101 23:21:24.619571  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:21:24.619579  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:21:24.619636  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:21:24.648201  185407 cri.go:87] found id: ""
	I1101 23:21:24.648228  185407 logs.go:274] 0 containers: []
	W1101 23:21:24.648235  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:21:24.648243  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:21:24.648295  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:21:24.672674  185407 cri.go:87] found id: ""
	I1101 23:21:24.672707  185407 logs.go:274] 0 containers: []
	W1101 23:21:24.672716  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:21:24.672723  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:21:24.672768  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:21:24.695755  185407 cri.go:87] found id: ""
	I1101 23:21:24.695778  185407 logs.go:274] 0 containers: []
	W1101 23:21:24.695789  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:21:24.695795  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:21:24.695835  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:21:24.720469  185407 cri.go:87] found id: ""
	I1101 23:21:24.720491  185407 logs.go:274] 0 containers: []
	W1101 23:21:24.720498  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:21:24.720504  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:21:24.720553  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:21:24.744826  185407 cri.go:87] found id: ""
	I1101 23:21:24.744854  185407 logs.go:274] 0 containers: []
	W1101 23:21:24.744862  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:21:24.744871  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:21:24.744925  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:21:24.768955  185407 cri.go:87] found id: ""
	I1101 23:21:24.768978  185407 logs.go:274] 0 containers: []
	W1101 23:21:24.768986  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:21:24.768994  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:21:24.769068  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:21:24.791316  185407 cri.go:87] found id: ""
	I1101 23:21:24.791343  185407 logs.go:274] 0 containers: []
	W1101 23:21:24.791350  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:21:24.791360  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:21:24.791374  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:21:24.816602  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:21:24.816627  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:21:24.833688  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:35 kubernetes-upgrade-231829 kubelet[1771]: E1101 23:20:35.157946    1771 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.834280  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:35 kubernetes-upgrade-231829 kubelet[1784]: E1101 23:20:35.895737    1784 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.834821  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:36 kubernetes-upgrade-231829 kubelet[1798]: E1101 23:20:36.641537    1798 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.835241  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:37 kubernetes-upgrade-231829 kubelet[1813]: E1101 23:20:37.394479    1813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.835677  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:38 kubernetes-upgrade-231829 kubelet[1826]: E1101 23:20:38.141279    1826 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.836043  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:38 kubernetes-upgrade-231829 kubelet[1842]: E1101 23:20:38.903929    1842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.836402  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:39 kubernetes-upgrade-231829 kubelet[1855]: E1101 23:20:39.648534    1855 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.836757  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:40 kubernetes-upgrade-231829 kubelet[1871]: E1101 23:20:40.391059    1871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.837116  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:41 kubernetes-upgrade-231829 kubelet[1884]: E1101 23:20:41.140631    1884 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.837476  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:41 kubernetes-upgrade-231829 kubelet[1900]: E1101 23:20:41.892418    1900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.837831  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:42 kubernetes-upgrade-231829 kubelet[1913]: E1101 23:20:42.637626    1913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.838186  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:43 kubernetes-upgrade-231829 kubelet[1928]: E1101 23:20:43.393249    1928 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.838551  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:44 kubernetes-upgrade-231829 kubelet[1941]: E1101 23:20:44.138138    1941 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.838914  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:44 kubernetes-upgrade-231829 kubelet[1956]: E1101 23:20:44.891555    1956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.839288  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:45 kubernetes-upgrade-231829 kubelet[1969]: E1101 23:20:45.639072    1969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.839673  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:46 kubernetes-upgrade-231829 kubelet[1984]: E1101 23:20:46.394166    1984 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.840055  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:47 kubernetes-upgrade-231829 kubelet[1997]: E1101 23:20:47.139952    1997 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.840416  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:47 kubernetes-upgrade-231829 kubelet[2012]: E1101 23:20:47.893725    2012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.840775  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:48 kubernetes-upgrade-231829 kubelet[2026]: E1101 23:20:48.640523    2026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.841135  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:49 kubernetes-upgrade-231829 kubelet[2042]: E1101 23:20:49.394841    2042 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.841493  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:50 kubernetes-upgrade-231829 kubelet[2056]: E1101 23:20:50.152975    2056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.841845  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:50 kubernetes-upgrade-231829 kubelet[2071]: E1101 23:20:50.903122    2071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.842204  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:51 kubernetes-upgrade-231829 kubelet[2084]: E1101 23:20:51.651761    2084 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.842565  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:52 kubernetes-upgrade-231829 kubelet[2100]: E1101 23:20:52.399392    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.842919  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:53 kubernetes-upgrade-231829 kubelet[2112]: E1101 23:20:53.152135    2112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.843271  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:53 kubernetes-upgrade-231829 kubelet[2127]: E1101 23:20:53.889839    2127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.843671  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:54 kubernetes-upgrade-231829 kubelet[2141]: E1101 23:20:54.640073    2141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.844058  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:55 kubernetes-upgrade-231829 kubelet[2157]: E1101 23:20:55.398070    2157 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.844426  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:56 kubernetes-upgrade-231829 kubelet[2170]: E1101 23:20:56.148544    2170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.844819  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:56 kubernetes-upgrade-231829 kubelet[2184]: E1101 23:20:56.887440    2184 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.845184  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:57 kubernetes-upgrade-231829 kubelet[2198]: E1101 23:20:57.641590    2198 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.845559  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:58 kubernetes-upgrade-231829 kubelet[2215]: E1101 23:20:58.388944    2215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.845919  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:59 kubernetes-upgrade-231829 kubelet[2228]: E1101 23:20:59.143388    2228 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.846275  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:59 kubernetes-upgrade-231829 kubelet[2243]: E1101 23:20:59.890904    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.846635  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:00 kubernetes-upgrade-231829 kubelet[2256]: E1101 23:21:00.636636    2256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.846991  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:01 kubernetes-upgrade-231829 kubelet[2270]: E1101 23:21:01.388089    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.847349  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2283]: E1101 23:21:02.144525    2283 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.847758  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2298]: E1101 23:21:02.892813    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.848129  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:03 kubernetes-upgrade-231829 kubelet[2312]: E1101 23:21:03.652890    2312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.848497  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:04 kubernetes-upgrade-231829 kubelet[2460]: E1101 23:21:04.393168    2460 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.848868  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:05 kubernetes-upgrade-231829 kubelet[2471]: E1101 23:21:05.140677    2471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.849222  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:05 kubernetes-upgrade-231829 kubelet[2482]: E1101 23:21:05.889642    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.849581  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:06 kubernetes-upgrade-231829 kubelet[2493]: E1101 23:21:06.639917    2493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.849933  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:07 kubernetes-upgrade-231829 kubelet[2504]: E1101 23:21:07.388854    2504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.850288  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:08 kubernetes-upgrade-231829 kubelet[2515]: E1101 23:21:08.139674    2515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.850645  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:08 kubernetes-upgrade-231829 kubelet[2527]: E1101 23:21:08.890019    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.851002  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:09 kubernetes-upgrade-231829 kubelet[2538]: E1101 23:21:09.642321    2538 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.851370  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:10 kubernetes-upgrade-231829 kubelet[2549]: E1101 23:21:10.391251    2549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.851749  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2560]: E1101 23:21:11.140830    2560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.852107  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2571]: E1101 23:21:11.897457    2571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.852464  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:12 kubernetes-upgrade-231829 kubelet[2582]: E1101 23:21:12.639246    2582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.852840  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:13 kubernetes-upgrade-231829 kubelet[2593]: E1101 23:21:13.389854    2593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.853207  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2606]: E1101 23:21:14.146256    2606 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.853576  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2754]: E1101 23:21:14.888015    2754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.853932  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:15 kubernetes-upgrade-231829 kubelet[2765]: E1101 23:21:15.638952    2765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.854293  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:16 kubernetes-upgrade-231829 kubelet[2777]: E1101 23:21:16.389971    2777 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.854656  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:17 kubernetes-upgrade-231829 kubelet[2789]: E1101 23:21:17.144871    2789 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.855013  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:17 kubernetes-upgrade-231829 kubelet[2801]: E1101 23:21:17.891844    2801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.855371  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:18 kubernetes-upgrade-231829 kubelet[2812]: E1101 23:21:18.637939    2812 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.855763  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:19 kubernetes-upgrade-231829 kubelet[2823]: E1101 23:21:19.389772    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.856125  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:20 kubernetes-upgrade-231829 kubelet[2833]: E1101 23:21:20.138094    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.856485  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:20 kubernetes-upgrade-231829 kubelet[2844]: E1101 23:21:20.890983    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.856880  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:21 kubernetes-upgrade-231829 kubelet[2856]: E1101 23:21:21.639364    2856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.857242  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:22 kubernetes-upgrade-231829 kubelet[2867]: E1101 23:21:22.388679    2867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.857611  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2878]: E1101 23:21:23.137390    2878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.857974  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2889]: E1101 23:21:23.889297    2889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.858369  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:24 kubernetes-upgrade-231829 kubelet[2901]: E1101 23:21:24.646913    2901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:24.858506  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:21:24.858523  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:21:24.872712  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:21:24.872739  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:21:24.926448  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:21:24.926469  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:21:24.926478  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:21:24.960658  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:24.960686  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:21:24.960793  185407 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1101 23:21:24.960805  185407 out.go:239]   Nov 01 23:21:21 kubernetes-upgrade-231829 kubelet[2856]: E1101 23:21:21.639364    2856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:21 kubernetes-upgrade-231829 kubelet[2856]: E1101 23:21:21.639364    2856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.960811  185407 out.go:239]   Nov 01 23:21:22 kubernetes-upgrade-231829 kubelet[2867]: E1101 23:21:22.388679    2867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:22 kubernetes-upgrade-231829 kubelet[2867]: E1101 23:21:22.388679    2867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.960815  185407 out.go:239]   Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2878]: E1101 23:21:23.137390    2878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2878]: E1101 23:21:23.137390    2878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.960822  185407 out.go:239]   Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2889]: E1101 23:21:23.889297    2889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2889]: E1101 23:21:23.889297    2889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:24.960827  185407 out.go:239]   Nov 01 23:21:24 kubernetes-upgrade-231829 kubelet[2901]: E1101 23:21:24.646913    2901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:24 kubernetes-upgrade-231829 kubelet[2901]: E1101 23:21:24.646913    2901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:24.960832  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:24.960838  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:21:34.962413  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:21:35.094615  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:21:35.094699  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:21:35.119180  185407 cri.go:87] found id: ""
	I1101 23:21:35.119202  185407 logs.go:274] 0 containers: []
	W1101 23:21:35.119208  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:21:35.119215  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:21:35.119257  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:21:35.143632  185407 cri.go:87] found id: ""
	I1101 23:21:35.143656  185407 logs.go:274] 0 containers: []
	W1101 23:21:35.143662  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:21:35.143668  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:21:35.143718  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:21:35.167324  185407 cri.go:87] found id: ""
	I1101 23:21:35.167350  185407 logs.go:274] 0 containers: []
	W1101 23:21:35.167356  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:21:35.167362  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:21:35.167431  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:21:35.190465  185407 cri.go:87] found id: ""
	I1101 23:21:35.190506  185407 logs.go:274] 0 containers: []
	W1101 23:21:35.190514  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:21:35.190524  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:21:35.190578  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:21:35.213115  185407 cri.go:87] found id: ""
	I1101 23:21:35.213139  185407 logs.go:274] 0 containers: []
	W1101 23:21:35.213145  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:21:35.213150  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:21:35.213188  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:21:35.235897  185407 cri.go:87] found id: ""
	I1101 23:21:35.235944  185407 logs.go:274] 0 containers: []
	W1101 23:21:35.235951  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:21:35.235959  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:21:35.236001  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:21:35.263801  185407 cri.go:87] found id: ""
	I1101 23:21:35.263825  185407 logs.go:274] 0 containers: []
	W1101 23:21:35.263831  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:21:35.263837  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:21:35.263884  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:21:35.287548  185407 cri.go:87] found id: ""
	I1101 23:21:35.287570  185407 logs.go:274] 0 containers: []
	W1101 23:21:35.287576  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:21:35.287585  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:21:35.287595  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:21:35.341517  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:21:35.341541  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:21:35.341555  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:21:35.374778  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:21:35.374814  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:21:35.400246  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:21:35.400275  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:21:35.416977  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:45 kubernetes-upgrade-231829 kubelet[1969]: E1101 23:20:45.639072    1969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.417339  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:46 kubernetes-upgrade-231829 kubelet[1984]: E1101 23:20:46.394166    1984 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.417690  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:47 kubernetes-upgrade-231829 kubelet[1997]: E1101 23:20:47.139952    1997 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.418032  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:47 kubernetes-upgrade-231829 kubelet[2012]: E1101 23:20:47.893725    2012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.418382  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:48 kubernetes-upgrade-231829 kubelet[2026]: E1101 23:20:48.640523    2026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.418733  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:49 kubernetes-upgrade-231829 kubelet[2042]: E1101 23:20:49.394841    2042 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.419080  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:50 kubernetes-upgrade-231829 kubelet[2056]: E1101 23:20:50.152975    2056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.419449  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:50 kubernetes-upgrade-231829 kubelet[2071]: E1101 23:20:50.903122    2071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.419813  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:51 kubernetes-upgrade-231829 kubelet[2084]: E1101 23:20:51.651761    2084 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.420156  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:52 kubernetes-upgrade-231829 kubelet[2100]: E1101 23:20:52.399392    2100 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.420500  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:53 kubernetes-upgrade-231829 kubelet[2112]: E1101 23:20:53.152135    2112 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.420851  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:53 kubernetes-upgrade-231829 kubelet[2127]: E1101 23:20:53.889839    2127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.421210  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:54 kubernetes-upgrade-231829 kubelet[2141]: E1101 23:20:54.640073    2141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.421561  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:55 kubernetes-upgrade-231829 kubelet[2157]: E1101 23:20:55.398070    2157 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.421907  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:56 kubernetes-upgrade-231829 kubelet[2170]: E1101 23:20:56.148544    2170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.422255  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:56 kubernetes-upgrade-231829 kubelet[2184]: E1101 23:20:56.887440    2184 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.422602  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:57 kubernetes-upgrade-231829 kubelet[2198]: E1101 23:20:57.641590    2198 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.422947  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:58 kubernetes-upgrade-231829 kubelet[2215]: E1101 23:20:58.388944    2215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.423288  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:59 kubernetes-upgrade-231829 kubelet[2228]: E1101 23:20:59.143388    2228 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.423660  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:59 kubernetes-upgrade-231829 kubelet[2243]: E1101 23:20:59.890904    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.424001  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:00 kubernetes-upgrade-231829 kubelet[2256]: E1101 23:21:00.636636    2256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.424361  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:01 kubernetes-upgrade-231829 kubelet[2270]: E1101 23:21:01.388089    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.424709  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2283]: E1101 23:21:02.144525    2283 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.425139  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2298]: E1101 23:21:02.892813    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.425530  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:03 kubernetes-upgrade-231829 kubelet[2312]: E1101 23:21:03.652890    2312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.425912  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:04 kubernetes-upgrade-231829 kubelet[2460]: E1101 23:21:04.393168    2460 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.426262  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:05 kubernetes-upgrade-231829 kubelet[2471]: E1101 23:21:05.140677    2471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.426601  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:05 kubernetes-upgrade-231829 kubelet[2482]: E1101 23:21:05.889642    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.426945  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:06 kubernetes-upgrade-231829 kubelet[2493]: E1101 23:21:06.639917    2493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.427310  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:07 kubernetes-upgrade-231829 kubelet[2504]: E1101 23:21:07.388854    2504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.427686  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:08 kubernetes-upgrade-231829 kubelet[2515]: E1101 23:21:08.139674    2515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.428026  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:08 kubernetes-upgrade-231829 kubelet[2527]: E1101 23:21:08.890019    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.428390  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:09 kubernetes-upgrade-231829 kubelet[2538]: E1101 23:21:09.642321    2538 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.428738  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:10 kubernetes-upgrade-231829 kubelet[2549]: E1101 23:21:10.391251    2549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.429084  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2560]: E1101 23:21:11.140830    2560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.429472  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2571]: E1101 23:21:11.897457    2571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.429831  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:12 kubernetes-upgrade-231829 kubelet[2582]: E1101 23:21:12.639246    2582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.430176  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:13 kubernetes-upgrade-231829 kubelet[2593]: E1101 23:21:13.389854    2593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.430550  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2606]: E1101 23:21:14.146256    2606 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.430892  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2754]: E1101 23:21:14.888015    2754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.431240  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:15 kubernetes-upgrade-231829 kubelet[2765]: E1101 23:21:15.638952    2765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.431641  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:16 kubernetes-upgrade-231829 kubelet[2777]: E1101 23:21:16.389971    2777 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.431993  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:17 kubernetes-upgrade-231829 kubelet[2789]: E1101 23:21:17.144871    2789 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.432339  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:17 kubernetes-upgrade-231829 kubelet[2801]: E1101 23:21:17.891844    2801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.432724  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:18 kubernetes-upgrade-231829 kubelet[2812]: E1101 23:21:18.637939    2812 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.433079  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:19 kubernetes-upgrade-231829 kubelet[2823]: E1101 23:21:19.389772    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.433419  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:20 kubernetes-upgrade-231829 kubelet[2833]: E1101 23:21:20.138094    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.433769  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:20 kubernetes-upgrade-231829 kubelet[2844]: E1101 23:21:20.890983    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.434114  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:21 kubernetes-upgrade-231829 kubelet[2856]: E1101 23:21:21.639364    2856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.434471  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:22 kubernetes-upgrade-231829 kubelet[2867]: E1101 23:21:22.388679    2867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.434861  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2878]: E1101 23:21:23.137390    2878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.435277  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2889]: E1101 23:21:23.889297    2889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.435713  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:24 kubernetes-upgrade-231829 kubelet[2901]: E1101 23:21:24.646913    2901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.436084  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:25 kubernetes-upgrade-231829 kubelet[3046]: E1101 23:21:25.390263    3046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.436431  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:26 kubernetes-upgrade-231829 kubelet[3058]: E1101 23:21:26.139842    3058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.436779  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:26 kubernetes-upgrade-231829 kubelet[3068]: E1101 23:21:26.888850    3068 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.437134  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:27 kubernetes-upgrade-231829 kubelet[3079]: E1101 23:21:27.640861    3079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.437472  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:28 kubernetes-upgrade-231829 kubelet[3090]: E1101 23:21:28.389301    3090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.437814  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:29 kubernetes-upgrade-231829 kubelet[3101]: E1101 23:21:29.139073    3101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.438152  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:29 kubernetes-upgrade-231829 kubelet[3113]: E1101 23:21:29.889802    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.438503  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:30 kubernetes-upgrade-231829 kubelet[3124]: E1101 23:21:30.641839    3124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.438883  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:31 kubernetes-upgrade-231829 kubelet[3136]: E1101 23:21:31.397544    3136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.439230  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3147]: E1101 23:21:32.140782    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.439611  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3158]: E1101 23:21:32.890801    3158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.439963  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:33 kubernetes-upgrade-231829 kubelet[3169]: E1101 23:21:33.640586    3169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.440314  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:34 kubernetes-upgrade-231829 kubelet[3181]: E1101 23:21:34.390773    3181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.440671  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3195]: E1101 23:21:35.141797    3195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:35.440789  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:21:35.440804  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:21:35.454551  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:35.454573  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:21:35.454662  185407 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1101 23:21:35.454673  185407 out.go:239]   Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3147]: E1101 23:21:32.140782    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3147]: E1101 23:21:32.140782    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.454680  185407 out.go:239]   Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3158]: E1101 23:21:32.890801    3158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3158]: E1101 23:21:32.890801    3158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.454685  185407 out.go:239]   Nov 01 23:21:33 kubernetes-upgrade-231829 kubelet[3169]: E1101 23:21:33.640586    3169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:33 kubernetes-upgrade-231829 kubelet[3169]: E1101 23:21:33.640586    3169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.454690  185407 out.go:239]   Nov 01 23:21:34 kubernetes-upgrade-231829 kubelet[3181]: E1101 23:21:34.390773    3181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:34 kubernetes-upgrade-231829 kubelet[3181]: E1101 23:21:34.390773    3181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:35.454695  185407 out.go:239]   Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3195]: E1101 23:21:35.141797    3195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3195]: E1101 23:21:35.141797    3195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:35.454698  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:35.454703  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:21:45.456036  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:21:45.595197  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:21:45.595273  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:21:45.624135  185407 cri.go:87] found id: ""
	I1101 23:21:45.624166  185407 logs.go:274] 0 containers: []
	W1101 23:21:45.624175  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:21:45.624183  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:21:45.624237  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:21:45.651189  185407 cri.go:87] found id: ""
	I1101 23:21:45.651231  185407 logs.go:274] 0 containers: []
	W1101 23:21:45.651237  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:21:45.651243  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:21:45.651281  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:21:45.673503  185407 cri.go:87] found id: ""
	I1101 23:21:45.673532  185407 logs.go:274] 0 containers: []
	W1101 23:21:45.673539  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:21:45.673548  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:21:45.673598  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:21:45.695537  185407 cri.go:87] found id: ""
	I1101 23:21:45.695558  185407 logs.go:274] 0 containers: []
	W1101 23:21:45.695565  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:21:45.695572  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:21:45.695626  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:21:45.719017  185407 cri.go:87] found id: ""
	I1101 23:21:45.719046  185407 logs.go:274] 0 containers: []
	W1101 23:21:45.719059  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:21:45.719067  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:21:45.719107  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:21:45.741259  185407 cri.go:87] found id: ""
	I1101 23:21:45.741286  185407 logs.go:274] 0 containers: []
	W1101 23:21:45.741295  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:21:45.741303  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:21:45.741343  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:21:45.763697  185407 cri.go:87] found id: ""
	I1101 23:21:45.763724  185407 logs.go:274] 0 containers: []
	W1101 23:21:45.763732  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:21:45.763741  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:21:45.763792  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:21:45.787436  185407 cri.go:87] found id: ""
	I1101 23:21:45.787464  185407 logs.go:274] 0 containers: []
	W1101 23:21:45.787469  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:21:45.787478  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:21:45.787488  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:21:45.804304  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:56 kubernetes-upgrade-231829 kubelet[2170]: E1101 23:20:56.148544    2170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.804710  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:56 kubernetes-upgrade-231829 kubelet[2184]: E1101 23:20:56.887440    2184 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.805081  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:57 kubernetes-upgrade-231829 kubelet[2198]: E1101 23:20:57.641590    2198 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.805421  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:58 kubernetes-upgrade-231829 kubelet[2215]: E1101 23:20:58.388944    2215 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.805778  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:59 kubernetes-upgrade-231829 kubelet[2228]: E1101 23:20:59.143388    2228 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.806122  185407 logs.go:138] Found kubelet problem: Nov 01 23:20:59 kubernetes-upgrade-231829 kubelet[2243]: E1101 23:20:59.890904    2243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.806466  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:00 kubernetes-upgrade-231829 kubelet[2256]: E1101 23:21:00.636636    2256 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.806823  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:01 kubernetes-upgrade-231829 kubelet[2270]: E1101 23:21:01.388089    2270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.807203  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2283]: E1101 23:21:02.144525    2283 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.807579  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:02 kubernetes-upgrade-231829 kubelet[2298]: E1101 23:21:02.892813    2298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.807924  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:03 kubernetes-upgrade-231829 kubelet[2312]: E1101 23:21:03.652890    2312 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.808274  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:04 kubernetes-upgrade-231829 kubelet[2460]: E1101 23:21:04.393168    2460 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.808619  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:05 kubernetes-upgrade-231829 kubelet[2471]: E1101 23:21:05.140677    2471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.808969  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:05 kubernetes-upgrade-231829 kubelet[2482]: E1101 23:21:05.889642    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.809308  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:06 kubernetes-upgrade-231829 kubelet[2493]: E1101 23:21:06.639917    2493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.809658  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:07 kubernetes-upgrade-231829 kubelet[2504]: E1101 23:21:07.388854    2504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.810002  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:08 kubernetes-upgrade-231829 kubelet[2515]: E1101 23:21:08.139674    2515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.810349  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:08 kubernetes-upgrade-231829 kubelet[2527]: E1101 23:21:08.890019    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.810712  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:09 kubernetes-upgrade-231829 kubelet[2538]: E1101 23:21:09.642321    2538 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.811063  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:10 kubernetes-upgrade-231829 kubelet[2549]: E1101 23:21:10.391251    2549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.811438  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2560]: E1101 23:21:11.140830    2560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.811782  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2571]: E1101 23:21:11.897457    2571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.812149  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:12 kubernetes-upgrade-231829 kubelet[2582]: E1101 23:21:12.639246    2582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.812488  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:13 kubernetes-upgrade-231829 kubelet[2593]: E1101 23:21:13.389854    2593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.812857  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2606]: E1101 23:21:14.146256    2606 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.813196  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2754]: E1101 23:21:14.888015    2754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.813538  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:15 kubernetes-upgrade-231829 kubelet[2765]: E1101 23:21:15.638952    2765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.813884  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:16 kubernetes-upgrade-231829 kubelet[2777]: E1101 23:21:16.389971    2777 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.814224  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:17 kubernetes-upgrade-231829 kubelet[2789]: E1101 23:21:17.144871    2789 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.814560  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:17 kubernetes-upgrade-231829 kubelet[2801]: E1101 23:21:17.891844    2801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.814908  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:18 kubernetes-upgrade-231829 kubelet[2812]: E1101 23:21:18.637939    2812 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.815252  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:19 kubernetes-upgrade-231829 kubelet[2823]: E1101 23:21:19.389772    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.815632  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:20 kubernetes-upgrade-231829 kubelet[2833]: E1101 23:21:20.138094    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.815977  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:20 kubernetes-upgrade-231829 kubelet[2844]: E1101 23:21:20.890983    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.816320  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:21 kubernetes-upgrade-231829 kubelet[2856]: E1101 23:21:21.639364    2856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.816667  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:22 kubernetes-upgrade-231829 kubelet[2867]: E1101 23:21:22.388679    2867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.817015  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2878]: E1101 23:21:23.137390    2878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.817354  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2889]: E1101 23:21:23.889297    2889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.817703  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:24 kubernetes-upgrade-231829 kubelet[2901]: E1101 23:21:24.646913    2901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.818059  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:25 kubernetes-upgrade-231829 kubelet[3046]: E1101 23:21:25.390263    3046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.818406  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:26 kubernetes-upgrade-231829 kubelet[3058]: E1101 23:21:26.139842    3058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.818754  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:26 kubernetes-upgrade-231829 kubelet[3068]: E1101 23:21:26.888850    3068 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.819129  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:27 kubernetes-upgrade-231829 kubelet[3079]: E1101 23:21:27.640861    3079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.819578  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:28 kubernetes-upgrade-231829 kubelet[3090]: E1101 23:21:28.389301    3090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.819932  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:29 kubernetes-upgrade-231829 kubelet[3101]: E1101 23:21:29.139073    3101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.820289  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:29 kubernetes-upgrade-231829 kubelet[3113]: E1101 23:21:29.889802    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.820629  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:30 kubernetes-upgrade-231829 kubelet[3124]: E1101 23:21:30.641839    3124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.820971  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:31 kubernetes-upgrade-231829 kubelet[3136]: E1101 23:21:31.397544    3136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.821326  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3147]: E1101 23:21:32.140782    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.821665  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3158]: E1101 23:21:32.890801    3158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.822007  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:33 kubernetes-upgrade-231829 kubelet[3169]: E1101 23:21:33.640586    3169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.822351  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:34 kubernetes-upgrade-231829 kubelet[3181]: E1101 23:21:34.390773    3181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.822697  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3195]: E1101 23:21:35.141797    3195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.823037  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3340]: E1101 23:21:35.891046    3340 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.823383  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:36 kubernetes-upgrade-231829 kubelet[3351]: E1101 23:21:36.641483    3351 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.823755  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:37 kubernetes-upgrade-231829 kubelet[3361]: E1101 23:21:37.387783    3361 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.824132  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:38 kubernetes-upgrade-231829 kubelet[3372]: E1101 23:21:38.138877    3372 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.824481  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:38 kubernetes-upgrade-231829 kubelet[3383]: E1101 23:21:38.890301    3383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.824856  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:39 kubernetes-upgrade-231829 kubelet[3395]: E1101 23:21:39.638023    3395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.825229  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:40 kubernetes-upgrade-231829 kubelet[3407]: E1101 23:21:40.388668    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.825572  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:41 kubernetes-upgrade-231829 kubelet[3418]: E1101 23:21:41.140119    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.825931  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:41 kubernetes-upgrade-231829 kubelet[3429]: E1101 23:21:41.888168    3429 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.826274  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:42 kubernetes-upgrade-231829 kubelet[3440]: E1101 23:21:42.638590    3440 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.826621  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:43 kubernetes-upgrade-231829 kubelet[3451]: E1101 23:21:43.389266    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.826972  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3463]: E1101 23:21:44.140376    3463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.827325  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3475]: E1101 23:21:44.890042    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.827688  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:45 kubernetes-upgrade-231829 kubelet[3488]: E1101 23:21:45.647580    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:45.827804  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:21:45.827818  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:21:45.842331  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:21:45.842359  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:21:45.895299  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:21:45.895320  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:21:45.895333  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:21:45.930055  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:21:45.930084  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:21:45.958003  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:45.958033  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:21:45.958156  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:21:45.958173  185407 out.go:239]   Nov 01 23:21:42 kubernetes-upgrade-231829 kubelet[3440]: E1101 23:21:42.638590    3440 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.958180  185407 out.go:239]   Nov 01 23:21:43 kubernetes-upgrade-231829 kubelet[3451]: E1101 23:21:43.389266    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.958186  185407 out.go:239]   Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3463]: E1101 23:21:44.140376    3463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.958196  185407 out.go:239]   Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3475]: E1101 23:21:44.890042    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:45.958203  185407 out.go:239]   Nov 01 23:21:45 kubernetes-upgrade-231829 kubelet[3488]: E1101 23:21:45.647580    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:45.958211  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:45.958219  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
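[editor's note] The crash loop above has a single root cause: kubelet 1.24 removed the long-deprecated dockershim flags (`--cni-conf-dir`, `--cni-bin-dir`, `--network-plugin`), so a systemd unit that still passes `--cni-conf-dir` fails flag parsing and exits before the apiserver can come up. One quick way to confirm that every kubelet failure in a captured journal is this same flag error is a grep over the log; the sample lines below are trimmed copies of entries from this report, used only as stand-in data:

```shell
# Two sample journal lines, trimmed from the kubelet problems reported above.
log='E1101 23:21:42.638590 3440 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
E1101 23:21:43.389266 3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"'

# Count how many failure lines mention the removed flag; if this equals the
# total number of "command failed" lines, the flag is the only problem.
printf '%s\n' "$log" | grep -c 'unknown flag: --cni-conf-dir'
```

Against a live node, the same check would run over `sudo journalctl -u kubelet` instead of the inline sample.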
	I1101 23:21:55.958475  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:21:56.094195  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:21:56.094263  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:21:56.118485  185407 cri.go:87] found id: ""
	I1101 23:21:56.118526  185407 logs.go:274] 0 containers: []
	W1101 23:21:56.118535  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:21:56.118549  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:21:56.118633  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:21:56.151830  185407 cri.go:87] found id: ""
	I1101 23:21:56.151856  185407 logs.go:274] 0 containers: []
	W1101 23:21:56.151863  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:21:56.151871  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:21:56.151970  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:21:56.177331  185407 cri.go:87] found id: ""
	I1101 23:21:56.177363  185407 logs.go:274] 0 containers: []
	W1101 23:21:56.177373  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:21:56.177382  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:21:56.177434  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:21:56.202437  185407 cri.go:87] found id: ""
	I1101 23:21:56.202468  185407 logs.go:274] 0 containers: []
	W1101 23:21:56.202477  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:21:56.202485  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:21:56.202530  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:21:56.230195  185407 cri.go:87] found id: ""
	I1101 23:21:56.230229  185407 logs.go:274] 0 containers: []
	W1101 23:21:56.230238  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:21:56.230247  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:21:56.230296  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:21:56.257274  185407 cri.go:87] found id: ""
	I1101 23:21:56.257302  185407 logs.go:274] 0 containers: []
	W1101 23:21:56.257311  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:21:56.257319  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:21:56.257372  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:21:56.282619  185407 cri.go:87] found id: ""
	I1101 23:21:56.282650  185407 logs.go:274] 0 containers: []
	W1101 23:21:56.282658  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:21:56.282667  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:21:56.282719  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:21:56.310032  185407 cri.go:87] found id: ""
	I1101 23:21:56.310060  185407 logs.go:274] 0 containers: []
	W1101 23:21:56.310068  185407 logs.go:276] No container was found matching "kube-controller-manager"
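[editor's note] The sequence above is minikube probing the runtime once per control-plane component with `sudo crictl ps -a --quiet --name=<component>`; because kubelet never starts, every query returns an empty ID list and logs "No container was found matching". A minimal sketch of that probe loop, with `crictl` stubbed out by a hypothetical function (no container runtime is assumed here):

```shell
# Hypothetical stand-in for `sudo crictl ps -a --quiet --name=$1`.
# With kubelet crash-looping, the real command prints no container IDs.
crictl_ids() { printf ''; }

# Same component list, in the same order, as the log section above.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kubernetes-dashboard storage-provisioner kube-controller-manager; do
  ids=$(crictl_ids "$name")
  if [ -z "$ids" ]; then
    echo "No container was found matching \"$name\""
  fi
done
```

When every probe comes back empty, minikube falls back to gathering kubelet, dmesg, and containerd logs, which is the path the transcript takes next.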
	I1101 23:21:56.310093  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:21:56.310107  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:21:56.326458  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:06 kubernetes-upgrade-231829 kubelet[2493]: E1101 23:21:06.639917    2493 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.326915  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:07 kubernetes-upgrade-231829 kubelet[2504]: E1101 23:21:07.388854    2504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.327346  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:08 kubernetes-upgrade-231829 kubelet[2515]: E1101 23:21:08.139674    2515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.327765  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:08 kubernetes-upgrade-231829 kubelet[2527]: E1101 23:21:08.890019    2527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.328147  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:09 kubernetes-upgrade-231829 kubelet[2538]: E1101 23:21:09.642321    2538 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.328523  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:10 kubernetes-upgrade-231829 kubelet[2549]: E1101 23:21:10.391251    2549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.328920  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2560]: E1101 23:21:11.140830    2560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.329299  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:11 kubernetes-upgrade-231829 kubelet[2571]: E1101 23:21:11.897457    2571 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.329680  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:12 kubernetes-upgrade-231829 kubelet[2582]: E1101 23:21:12.639246    2582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.330056  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:13 kubernetes-upgrade-231829 kubelet[2593]: E1101 23:21:13.389854    2593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.330475  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2606]: E1101 23:21:14.146256    2606 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.330861  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:14 kubernetes-upgrade-231829 kubelet[2754]: E1101 23:21:14.888015    2754 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.331216  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:15 kubernetes-upgrade-231829 kubelet[2765]: E1101 23:21:15.638952    2765 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.331594  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:16 kubernetes-upgrade-231829 kubelet[2777]: E1101 23:21:16.389971    2777 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.331943  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:17 kubernetes-upgrade-231829 kubelet[2789]: E1101 23:21:17.144871    2789 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.332292  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:17 kubernetes-upgrade-231829 kubelet[2801]: E1101 23:21:17.891844    2801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.332648  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:18 kubernetes-upgrade-231829 kubelet[2812]: E1101 23:21:18.637939    2812 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.333017  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:19 kubernetes-upgrade-231829 kubelet[2823]: E1101 23:21:19.389772    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.333371  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:20 kubernetes-upgrade-231829 kubelet[2833]: E1101 23:21:20.138094    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.333724  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:20 kubernetes-upgrade-231829 kubelet[2844]: E1101 23:21:20.890983    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.334074  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:21 kubernetes-upgrade-231829 kubelet[2856]: E1101 23:21:21.639364    2856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.334429  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:22 kubernetes-upgrade-231829 kubelet[2867]: E1101 23:21:22.388679    2867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.334792  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2878]: E1101 23:21:23.137390    2878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.335203  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2889]: E1101 23:21:23.889297    2889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.335754  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:24 kubernetes-upgrade-231829 kubelet[2901]: E1101 23:21:24.646913    2901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.336152  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:25 kubernetes-upgrade-231829 kubelet[3046]: E1101 23:21:25.390263    3046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.336513  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:26 kubernetes-upgrade-231829 kubelet[3058]: E1101 23:21:26.139842    3058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.336886  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:26 kubernetes-upgrade-231829 kubelet[3068]: E1101 23:21:26.888850    3068 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.337254  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:27 kubernetes-upgrade-231829 kubelet[3079]: E1101 23:21:27.640861    3079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.337633  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:28 kubernetes-upgrade-231829 kubelet[3090]: E1101 23:21:28.389301    3090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.338036  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:29 kubernetes-upgrade-231829 kubelet[3101]: E1101 23:21:29.139073    3101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.338412  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:29 kubernetes-upgrade-231829 kubelet[3113]: E1101 23:21:29.889802    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.338818  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:30 kubernetes-upgrade-231829 kubelet[3124]: E1101 23:21:30.641839    3124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.339180  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:31 kubernetes-upgrade-231829 kubelet[3136]: E1101 23:21:31.397544    3136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.339615  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3147]: E1101 23:21:32.140782    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.339965  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3158]: E1101 23:21:32.890801    3158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.340312  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:33 kubernetes-upgrade-231829 kubelet[3169]: E1101 23:21:33.640586    3169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.340689  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:34 kubernetes-upgrade-231829 kubelet[3181]: E1101 23:21:34.390773    3181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.341096  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3195]: E1101 23:21:35.141797    3195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.341476  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3340]: E1101 23:21:35.891046    3340 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.341914  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:36 kubernetes-upgrade-231829 kubelet[3351]: E1101 23:21:36.641483    3351 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.342293  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:37 kubernetes-upgrade-231829 kubelet[3361]: E1101 23:21:37.387783    3361 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.342682  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:38 kubernetes-upgrade-231829 kubelet[3372]: E1101 23:21:38.138877    3372 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.343059  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:38 kubernetes-upgrade-231829 kubelet[3383]: E1101 23:21:38.890301    3383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.343524  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:39 kubernetes-upgrade-231829 kubelet[3395]: E1101 23:21:39.638023    3395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.343921  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:40 kubernetes-upgrade-231829 kubelet[3407]: E1101 23:21:40.388668    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.344327  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:41 kubernetes-upgrade-231829 kubelet[3418]: E1101 23:21:41.140119    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.344708  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:41 kubernetes-upgrade-231829 kubelet[3429]: E1101 23:21:41.888168    3429 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.345082  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:42 kubernetes-upgrade-231829 kubelet[3440]: E1101 23:21:42.638590    3440 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.345526  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:43 kubernetes-upgrade-231829 kubelet[3451]: E1101 23:21:43.389266    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.345961  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3463]: E1101 23:21:44.140376    3463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.346357  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3475]: E1101 23:21:44.890042    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.346740  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:45 kubernetes-upgrade-231829 kubelet[3488]: E1101 23:21:45.647580    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.347125  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:46 kubernetes-upgrade-231829 kubelet[3632]: E1101 23:21:46.394285    3632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.347537  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:47 kubernetes-upgrade-231829 kubelet[3643]: E1101 23:21:47.139709    3643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.347958  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:47 kubernetes-upgrade-231829 kubelet[3654]: E1101 23:21:47.889878    3654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.348340  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:48 kubernetes-upgrade-231829 kubelet[3666]: E1101 23:21:48.643727    3666 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.348725  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:49 kubernetes-upgrade-231829 kubelet[3677]: E1101 23:21:49.388913    3677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.349108  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:50 kubernetes-upgrade-231829 kubelet[3689]: E1101 23:21:50.138547    3689 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.349489  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:50 kubernetes-upgrade-231829 kubelet[3700]: E1101 23:21:50.896408    3700 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.349866  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:51 kubernetes-upgrade-231829 kubelet[3711]: E1101 23:21:51.638374    3711 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.350246  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:52 kubernetes-upgrade-231829 kubelet[3722]: E1101 23:21:52.395768    3722 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.350619  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3733]: E1101 23:21:53.154323    3733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.350991  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3744]: E1101 23:21:53.894216    3744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.351367  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:54 kubernetes-upgrade-231829 kubelet[3756]: E1101 23:21:54.648817    3756 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.351935  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:55 kubernetes-upgrade-231829 kubelet[3767]: E1101 23:21:55.391819    3767 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.352409  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:56 kubernetes-upgrade-231829 kubelet[3780]: E1101 23:21:56.149341    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:56.352532  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:21:56.352551  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:21:56.367974  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:21:56.368010  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:21:56.436144  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:21:56.436171  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:21:56.436184  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:21:56.475992  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:21:56.476027  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:21:56.505431  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:56.505459  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:21:56.505592  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:21:56.505619  185407 out.go:239]   Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3733]: E1101 23:21:53.154323    3733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.505629  185407 out.go:239]   Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3744]: E1101 23:21:53.894216    3744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.505639  185407 out.go:239]   Nov 01 23:21:54 kubernetes-upgrade-231829 kubelet[3756]: E1101 23:21:54.648817    3756 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.505646  185407 out.go:239]   Nov 01 23:21:55 kubernetes-upgrade-231829 kubelet[3767]: E1101 23:21:55.391819    3767 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:21:56.505655  185407 out.go:239]   Nov 01 23:21:56 kubernetes-upgrade-231829 kubelet[3780]: E1101 23:21:56.149341    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:21:56.505661  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:21:56.505670  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:22:06.507842  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:22:06.594274  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:22:06.594333  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:22:06.620511  185407 cri.go:87] found id: ""
	I1101 23:22:06.620537  185407 logs.go:274] 0 containers: []
	W1101 23:22:06.620545  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:22:06.620553  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:22:06.620603  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:22:06.646187  185407 cri.go:87] found id: ""
	I1101 23:22:06.646226  185407 logs.go:274] 0 containers: []
	W1101 23:22:06.646235  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:22:06.646242  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:22:06.646305  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:22:06.670041  185407 cri.go:87] found id: ""
	I1101 23:22:06.670064  185407 logs.go:274] 0 containers: []
	W1101 23:22:06.670070  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:22:06.670076  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:22:06.670117  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:22:06.692919  185407 cri.go:87] found id: ""
	I1101 23:22:06.692945  185407 logs.go:274] 0 containers: []
	W1101 23:22:06.692954  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:22:06.692962  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:22:06.693011  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:22:06.716824  185407 cri.go:87] found id: ""
	I1101 23:22:06.716851  185407 logs.go:274] 0 containers: []
	W1101 23:22:06.716857  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:22:06.716863  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:22:06.716903  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:22:06.739685  185407 cri.go:87] found id: ""
	I1101 23:22:06.739712  185407 logs.go:274] 0 containers: []
	W1101 23:22:06.739718  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:22:06.739730  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:22:06.739780  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:22:06.762460  185407 cri.go:87] found id: ""
	I1101 23:22:06.762486  185407 logs.go:274] 0 containers: []
	W1101 23:22:06.762493  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:22:06.762499  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:22:06.762538  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:22:06.785964  185407 cri.go:87] found id: ""
	I1101 23:22:06.785991  185407 logs.go:274] 0 containers: []
	W1101 23:22:06.786000  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:22:06.786011  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:22:06.786025  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:22:06.800566  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:22:06.800590  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:22:06.854002  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:22:06.854029  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:22:06.854041  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:22:06.888160  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:22:06.888191  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:22:06.914208  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:22:06.914233  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:22:06.931321  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:17 kubernetes-upgrade-231829 kubelet[2789]: E1101 23:21:17.144871    2789 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.931712  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:17 kubernetes-upgrade-231829 kubelet[2801]: E1101 23:21:17.891844    2801 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.932057  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:18 kubernetes-upgrade-231829 kubelet[2812]: E1101 23:21:18.637939    2812 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.932405  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:19 kubernetes-upgrade-231829 kubelet[2823]: E1101 23:21:19.389772    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.932787  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:20 kubernetes-upgrade-231829 kubelet[2833]: E1101 23:21:20.138094    2833 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.933148  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:20 kubernetes-upgrade-231829 kubelet[2844]: E1101 23:21:20.890983    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.933510  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:21 kubernetes-upgrade-231829 kubelet[2856]: E1101 23:21:21.639364    2856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.934034  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:22 kubernetes-upgrade-231829 kubelet[2867]: E1101 23:21:22.388679    2867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.934440  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2878]: E1101 23:21:23.137390    2878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.934807  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:23 kubernetes-upgrade-231829 kubelet[2889]: E1101 23:21:23.889297    2889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.935183  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:24 kubernetes-upgrade-231829 kubelet[2901]: E1101 23:21:24.646913    2901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.935647  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:25 kubernetes-upgrade-231829 kubelet[3046]: E1101 23:21:25.390263    3046 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.935996  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:26 kubernetes-upgrade-231829 kubelet[3058]: E1101 23:21:26.139842    3058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.936338  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:26 kubernetes-upgrade-231829 kubelet[3068]: E1101 23:21:26.888850    3068 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.936844  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:27 kubernetes-upgrade-231829 kubelet[3079]: E1101 23:21:27.640861    3079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.937421  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:28 kubernetes-upgrade-231829 kubelet[3090]: E1101 23:21:28.389301    3090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.937994  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:29 kubernetes-upgrade-231829 kubelet[3101]: E1101 23:21:29.139073    3101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.938348  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:29 kubernetes-upgrade-231829 kubelet[3113]: E1101 23:21:29.889802    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.938716  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:30 kubernetes-upgrade-231829 kubelet[3124]: E1101 23:21:30.641839    3124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.939064  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:31 kubernetes-upgrade-231829 kubelet[3136]: E1101 23:21:31.397544    3136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.939454  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3147]: E1101 23:21:32.140782    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.939859  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3158]: E1101 23:21:32.890801    3158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.940218  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:33 kubernetes-upgrade-231829 kubelet[3169]: E1101 23:21:33.640586    3169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.940569  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:34 kubernetes-upgrade-231829 kubelet[3181]: E1101 23:21:34.390773    3181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.940910  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3195]: E1101 23:21:35.141797    3195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.941253  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3340]: E1101 23:21:35.891046    3340 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.941610  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:36 kubernetes-upgrade-231829 kubelet[3351]: E1101 23:21:36.641483    3351 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.941959  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:37 kubernetes-upgrade-231829 kubelet[3361]: E1101 23:21:37.387783    3361 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.942303  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:38 kubernetes-upgrade-231829 kubelet[3372]: E1101 23:21:38.138877    3372 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.942650  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:38 kubernetes-upgrade-231829 kubelet[3383]: E1101 23:21:38.890301    3383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.942993  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:39 kubernetes-upgrade-231829 kubelet[3395]: E1101 23:21:39.638023    3395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.943336  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:40 kubernetes-upgrade-231829 kubelet[3407]: E1101 23:21:40.388668    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.943720  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:41 kubernetes-upgrade-231829 kubelet[3418]: E1101 23:21:41.140119    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.944075  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:41 kubernetes-upgrade-231829 kubelet[3429]: E1101 23:21:41.888168    3429 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.944423  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:42 kubernetes-upgrade-231829 kubelet[3440]: E1101 23:21:42.638590    3440 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.944773  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:43 kubernetes-upgrade-231829 kubelet[3451]: E1101 23:21:43.389266    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.945117  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3463]: E1101 23:21:44.140376    3463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.945464  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3475]: E1101 23:21:44.890042    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.945851  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:45 kubernetes-upgrade-231829 kubelet[3488]: E1101 23:21:45.647580    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.946222  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:46 kubernetes-upgrade-231829 kubelet[3632]: E1101 23:21:46.394285    3632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.946570  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:47 kubernetes-upgrade-231829 kubelet[3643]: E1101 23:21:47.139709    3643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.946927  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:47 kubernetes-upgrade-231829 kubelet[3654]: E1101 23:21:47.889878    3654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.947271  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:48 kubernetes-upgrade-231829 kubelet[3666]: E1101 23:21:48.643727    3666 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.947645  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:49 kubernetes-upgrade-231829 kubelet[3677]: E1101 23:21:49.388913    3677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.947986  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:50 kubernetes-upgrade-231829 kubelet[3689]: E1101 23:21:50.138547    3689 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.948349  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:50 kubernetes-upgrade-231829 kubelet[3700]: E1101 23:21:50.896408    3700 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.948735  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:51 kubernetes-upgrade-231829 kubelet[3711]: E1101 23:21:51.638374    3711 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.949098  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:52 kubernetes-upgrade-231829 kubelet[3722]: E1101 23:21:52.395768    3722 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.949522  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3733]: E1101 23:21:53.154323    3733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.949902  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3744]: E1101 23:21:53.894216    3744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.950248  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:54 kubernetes-upgrade-231829 kubelet[3756]: E1101 23:21:54.648817    3756 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.950599  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:55 kubernetes-upgrade-231829 kubelet[3767]: E1101 23:21:55.391819    3767 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.950967  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:56 kubernetes-upgrade-231829 kubelet[3780]: E1101 23:21:56.149341    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.951333  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:56 kubernetes-upgrade-231829 kubelet[3925]: E1101 23:21:56.890070    3925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.951710  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:57 kubernetes-upgrade-231829 kubelet[3936]: E1101 23:21:57.638687    3936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.952079  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:58 kubernetes-upgrade-231829 kubelet[3947]: E1101 23:21:58.400409    3947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.952418  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:59 kubernetes-upgrade-231829 kubelet[3958]: E1101 23:21:59.145294    3958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.952781  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:59 kubernetes-upgrade-231829 kubelet[3968]: E1101 23:21:59.974939    3968 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.953152  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:00 kubernetes-upgrade-231829 kubelet[3979]: E1101 23:22:00.640573    3979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.953507  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:01 kubernetes-upgrade-231829 kubelet[3991]: E1101 23:22:01.401862    3991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.953868  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:02 kubernetes-upgrade-231829 kubelet[4003]: E1101 23:22:02.142917    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.954262  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:02 kubernetes-upgrade-231829 kubelet[4014]: E1101 23:22:02.890049    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.954630  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:03 kubernetes-upgrade-231829 kubelet[4025]: E1101 23:22:03.639488    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.954997  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:04 kubernetes-upgrade-231829 kubelet[4036]: E1101 23:22:04.392034    4036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.955378  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4047]: E1101 23:22:05.138946    4047 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.955862  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4058]: E1101 23:22:05.888813    4058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.956247  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:06 kubernetes-upgrade-231829 kubelet[4071]: E1101 23:22:06.645296    4071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:06.956378  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:06.956393  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:22:06.956503  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:22:06.956519  185407 out.go:239]   Nov 01 23:22:03 kubernetes-upgrade-231829 kubelet[4025]: E1101 23:22:03.639488    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.956526  185407 out.go:239]   Nov 01 23:22:04 kubernetes-upgrade-231829 kubelet[4036]: E1101 23:22:04.392034    4036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.956537  185407 out.go:239]   Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4047]: E1101 23:22:05.138946    4047 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.956565  185407 out.go:239]   Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4058]: E1101 23:22:05.888813    4058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:06.956576  185407 out.go:239]   Nov 01 23:22:06 kubernetes-upgrade-231829 kubelet[4071]: E1101 23:22:06.645296    4071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:06.956585  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:06.956595  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:22:16.957153  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:22:17.094880  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:22:17.094959  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:22:17.122733  185407 cri.go:87] found id: ""
	I1101 23:22:17.122757  185407 logs.go:274] 0 containers: []
	W1101 23:22:17.122763  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:22:17.122770  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:22:17.122811  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:22:17.148381  185407 cri.go:87] found id: ""
	I1101 23:22:17.148405  185407 logs.go:274] 0 containers: []
	W1101 23:22:17.148412  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:22:17.148417  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:22:17.148465  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:22:17.171322  185407 cri.go:87] found id: ""
	I1101 23:22:17.171348  185407 logs.go:274] 0 containers: []
	W1101 23:22:17.171356  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:22:17.171363  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:22:17.171436  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:22:17.194113  185407 cri.go:87] found id: ""
	I1101 23:22:17.194147  185407 logs.go:274] 0 containers: []
	W1101 23:22:17.194157  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:22:17.194167  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:22:17.194222  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:22:17.218137  185407 cri.go:87] found id: ""
	I1101 23:22:17.218157  185407 logs.go:274] 0 containers: []
	W1101 23:22:17.218163  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:22:17.218169  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:22:17.218210  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:22:17.241366  185407 cri.go:87] found id: ""
	I1101 23:22:17.241389  185407 logs.go:274] 0 containers: []
	W1101 23:22:17.241396  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:22:17.241402  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:22:17.241450  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:22:17.264195  185407 cri.go:87] found id: ""
	I1101 23:22:17.264217  185407 logs.go:274] 0 containers: []
	W1101 23:22:17.264223  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:22:17.264229  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:22:17.264268  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:22:17.286358  185407 cri.go:87] found id: ""
	I1101 23:22:17.286380  185407 logs.go:274] 0 containers: []
	W1101 23:22:17.286386  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:22:17.286395  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:22:17.286405  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:22:17.302148  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:27 kubernetes-upgrade-231829 kubelet[3079]: E1101 23:21:27.640861    3079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.302540  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:28 kubernetes-upgrade-231829 kubelet[3090]: E1101 23:21:28.389301    3090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.302984  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:29 kubernetes-upgrade-231829 kubelet[3101]: E1101 23:21:29.139073    3101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.303354  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:29 kubernetes-upgrade-231829 kubelet[3113]: E1101 23:21:29.889802    3113 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.303763  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:30 kubernetes-upgrade-231829 kubelet[3124]: E1101 23:21:30.641839    3124 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.304137  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:31 kubernetes-upgrade-231829 kubelet[3136]: E1101 23:21:31.397544    3136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.304513  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3147]: E1101 23:21:32.140782    3147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.304881  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:32 kubernetes-upgrade-231829 kubelet[3158]: E1101 23:21:32.890801    3158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.305243  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:33 kubernetes-upgrade-231829 kubelet[3169]: E1101 23:21:33.640586    3169 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.305605  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:34 kubernetes-upgrade-231829 kubelet[3181]: E1101 23:21:34.390773    3181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.306071  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3195]: E1101 23:21:35.141797    3195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.306599  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:35 kubernetes-upgrade-231829 kubelet[3340]: E1101 23:21:35.891046    3340 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.307075  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:36 kubernetes-upgrade-231829 kubelet[3351]: E1101 23:21:36.641483    3351 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.307447  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:37 kubernetes-upgrade-231829 kubelet[3361]: E1101 23:21:37.387783    3361 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.307913  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:38 kubernetes-upgrade-231829 kubelet[3372]: E1101 23:21:38.138877    3372 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.308496  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:38 kubernetes-upgrade-231829 kubelet[3383]: E1101 23:21:38.890301    3383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.309076  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:39 kubernetes-upgrade-231829 kubelet[3395]: E1101 23:21:39.638023    3395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.309578  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:40 kubernetes-upgrade-231829 kubelet[3407]: E1101 23:21:40.388668    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.309976  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:41 kubernetes-upgrade-231829 kubelet[3418]: E1101 23:21:41.140119    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.310544  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:41 kubernetes-upgrade-231829 kubelet[3429]: E1101 23:21:41.888168    3429 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.311148  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:42 kubernetes-upgrade-231829 kubelet[3440]: E1101 23:21:42.638590    3440 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.311736  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:43 kubernetes-upgrade-231829 kubelet[3451]: E1101 23:21:43.389266    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.312353  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3463]: E1101 23:21:44.140376    3463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.312967  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3475]: E1101 23:21:44.890042    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.313560  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:45 kubernetes-upgrade-231829 kubelet[3488]: E1101 23:21:45.647580    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.314142  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:46 kubernetes-upgrade-231829 kubelet[3632]: E1101 23:21:46.394285    3632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.314734  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:47 kubernetes-upgrade-231829 kubelet[3643]: E1101 23:21:47.139709    3643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.315289  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:47 kubernetes-upgrade-231829 kubelet[3654]: E1101 23:21:47.889878    3654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.315665  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:48 kubernetes-upgrade-231829 kubelet[3666]: E1101 23:21:48.643727    3666 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.316023  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:49 kubernetes-upgrade-231829 kubelet[3677]: E1101 23:21:49.388913    3677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.316369  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:50 kubernetes-upgrade-231829 kubelet[3689]: E1101 23:21:50.138547    3689 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.316718  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:50 kubernetes-upgrade-231829 kubelet[3700]: E1101 23:21:50.896408    3700 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.317083  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:51 kubernetes-upgrade-231829 kubelet[3711]: E1101 23:21:51.638374    3711 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.317615  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:52 kubernetes-upgrade-231829 kubelet[3722]: E1101 23:21:52.395768    3722 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.317978  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3733]: E1101 23:21:53.154323    3733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.318333  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3744]: E1101 23:21:53.894216    3744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.318678  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:54 kubernetes-upgrade-231829 kubelet[3756]: E1101 23:21:54.648817    3756 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.319025  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:55 kubernetes-upgrade-231829 kubelet[3767]: E1101 23:21:55.391819    3767 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.319373  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:56 kubernetes-upgrade-231829 kubelet[3780]: E1101 23:21:56.149341    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.319740  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:56 kubernetes-upgrade-231829 kubelet[3925]: E1101 23:21:56.890070    3925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.320088  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:57 kubernetes-upgrade-231829 kubelet[3936]: E1101 23:21:57.638687    3936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.320438  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:58 kubernetes-upgrade-231829 kubelet[3947]: E1101 23:21:58.400409    3947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.320791  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:59 kubernetes-upgrade-231829 kubelet[3958]: E1101 23:21:59.145294    3958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.321135  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:59 kubernetes-upgrade-231829 kubelet[3968]: E1101 23:21:59.974939    3968 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.321478  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:00 kubernetes-upgrade-231829 kubelet[3979]: E1101 23:22:00.640573    3979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.321819  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:01 kubernetes-upgrade-231829 kubelet[3991]: E1101 23:22:01.401862    3991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.322166  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:02 kubernetes-upgrade-231829 kubelet[4003]: E1101 23:22:02.142917    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.322508  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:02 kubernetes-upgrade-231829 kubelet[4014]: E1101 23:22:02.890049    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.322856  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:03 kubernetes-upgrade-231829 kubelet[4025]: E1101 23:22:03.639488    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.323199  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:04 kubernetes-upgrade-231829 kubelet[4036]: E1101 23:22:04.392034    4036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.323570  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4047]: E1101 23:22:05.138946    4047 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.323926  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4058]: E1101 23:22:05.888813    4058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.324360  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:06 kubernetes-upgrade-231829 kubelet[4071]: E1101 23:22:06.645296    4071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.324720  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:07 kubernetes-upgrade-231829 kubelet[4220]: E1101 23:22:07.387754    4220 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.325074  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:08 kubernetes-upgrade-231829 kubelet[4231]: E1101 23:22:08.139031    4231 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.325422  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:08 kubernetes-upgrade-231829 kubelet[4242]: E1101 23:22:08.889682    4242 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.325763  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:09 kubernetes-upgrade-231829 kubelet[4253]: E1101 23:22:09.639661    4253 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.326105  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:10 kubernetes-upgrade-231829 kubelet[4264]: E1101 23:22:10.390263    4264 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.326445  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:11 kubernetes-upgrade-231829 kubelet[4275]: E1101 23:22:11.140372    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.326795  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:11 kubernetes-upgrade-231829 kubelet[4287]: E1101 23:22:11.890679    4287 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.327145  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:12 kubernetes-upgrade-231829 kubelet[4298]: E1101 23:22:12.641013    4298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.327519  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:13 kubernetes-upgrade-231829 kubelet[4310]: E1101 23:22:13.389025    4310 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.327867  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4322]: E1101 23:22:14.138835    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.328218  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4333]: E1101 23:22:14.887210    4333 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.328569  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:15 kubernetes-upgrade-231829 kubelet[4344]: E1101 23:22:15.638855    4344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.328932  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:16 kubernetes-upgrade-231829 kubelet[4355]: E1101 23:22:16.387714    4355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.329281  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:17 kubernetes-upgrade-231829 kubelet[4368]: E1101 23:22:17.146654    4368 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:17.329397  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:22:17.329414  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:22:17.344703  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:22:17.344731  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:22:17.404045  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:22:17.404076  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:22:17.404087  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:22:17.439910  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:22:17.439945  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:22:17.470795  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:17.470822  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:22:17.470930  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:22:17.470946  185407 out.go:239]   Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4322]: E1101 23:22:14.138835    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.470952  185407 out.go:239]   Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4333]: E1101 23:22:14.887210    4333 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.470960  185407 out.go:239]   Nov 01 23:22:15 kubernetes-upgrade-231829 kubelet[4344]: E1101 23:22:15.638855    4344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.470966  185407 out.go:239]   Nov 01 23:22:16 kubernetes-upgrade-231829 kubelet[4355]: E1101 23:22:16.387714    4355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:17.470973  185407 out.go:239]   Nov 01 23:22:17 kubernetes-upgrade-231829 kubelet[4368]: E1101 23:22:17.146654    4368 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:17.470980  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:17.470988  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:22:27.472277  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:22:27.594504  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:22:27.594580  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:22:27.621184  185407 cri.go:87] found id: ""
	I1101 23:22:27.621212  185407 logs.go:274] 0 containers: []
	W1101 23:22:27.621221  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:22:27.621230  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:22:27.621281  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:22:27.647943  185407 cri.go:87] found id: ""
	I1101 23:22:27.647978  185407 logs.go:274] 0 containers: []
	W1101 23:22:27.647988  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:22:27.647996  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:22:27.648048  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:22:27.670242  185407 cri.go:87] found id: ""
	I1101 23:22:27.670266  185407 logs.go:274] 0 containers: []
	W1101 23:22:27.670274  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:22:27.670280  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:22:27.670318  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:22:27.692286  185407 cri.go:87] found id: ""
	I1101 23:22:27.692315  185407 logs.go:274] 0 containers: []
	W1101 23:22:27.692324  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:22:27.692330  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:22:27.692371  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:22:27.715444  185407 cri.go:87] found id: ""
	I1101 23:22:27.715470  185407 logs.go:274] 0 containers: []
	W1101 23:22:27.715475  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:22:27.715481  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:22:27.715541  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:22:27.739038  185407 cri.go:87] found id: ""
	I1101 23:22:27.739060  185407 logs.go:274] 0 containers: []
	W1101 23:22:27.739066  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:22:27.739072  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:22:27.739112  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:22:27.761475  185407 cri.go:87] found id: ""
	I1101 23:22:27.761504  185407 logs.go:274] 0 containers: []
	W1101 23:22:27.761513  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:22:27.761521  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:22:27.761573  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:22:27.784677  185407 cri.go:87] found id: ""
	I1101 23:22:27.784700  185407 logs.go:274] 0 containers: []
	W1101 23:22:27.784706  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:22:27.784714  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:22:27.784723  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:22:27.810961  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:22:27.810986  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:22:27.826198  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:38 kubernetes-upgrade-231829 kubelet[3372]: E1101 23:21:38.138877    3372 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.826793  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:38 kubernetes-upgrade-231829 kubelet[3383]: E1101 23:21:38.890301    3383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.827380  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:39 kubernetes-upgrade-231829 kubelet[3395]: E1101 23:21:39.638023    3395 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.828002  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:40 kubernetes-upgrade-231829 kubelet[3407]: E1101 23:21:40.388668    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.828592  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:41 kubernetes-upgrade-231829 kubelet[3418]: E1101 23:21:41.140119    3418 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.829176  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:41 kubernetes-upgrade-231829 kubelet[3429]: E1101 23:21:41.888168    3429 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.829756  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:42 kubernetes-upgrade-231829 kubelet[3440]: E1101 23:21:42.638590    3440 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.830300  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:43 kubernetes-upgrade-231829 kubelet[3451]: E1101 23:21:43.389266    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.830700  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3463]: E1101 23:21:44.140376    3463 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.831074  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:44 kubernetes-upgrade-231829 kubelet[3475]: E1101 23:21:44.890042    3475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.831450  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:45 kubernetes-upgrade-231829 kubelet[3488]: E1101 23:21:45.647580    3488 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.831812  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:46 kubernetes-upgrade-231829 kubelet[3632]: E1101 23:21:46.394285    3632 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.832175  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:47 kubernetes-upgrade-231829 kubelet[3643]: E1101 23:21:47.139709    3643 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.832518  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:47 kubernetes-upgrade-231829 kubelet[3654]: E1101 23:21:47.889878    3654 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.832874  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:48 kubernetes-upgrade-231829 kubelet[3666]: E1101 23:21:48.643727    3666 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.833237  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:49 kubernetes-upgrade-231829 kubelet[3677]: E1101 23:21:49.388913    3677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.833585  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:50 kubernetes-upgrade-231829 kubelet[3689]: E1101 23:21:50.138547    3689 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.833928  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:50 kubernetes-upgrade-231829 kubelet[3700]: E1101 23:21:50.896408    3700 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.834273  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:51 kubernetes-upgrade-231829 kubelet[3711]: E1101 23:21:51.638374    3711 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.834617  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:52 kubernetes-upgrade-231829 kubelet[3722]: E1101 23:21:52.395768    3722 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.834975  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3733]: E1101 23:21:53.154323    3733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.835315  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3744]: E1101 23:21:53.894216    3744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.835743  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:54 kubernetes-upgrade-231829 kubelet[3756]: E1101 23:21:54.648817    3756 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.836098  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:55 kubernetes-upgrade-231829 kubelet[3767]: E1101 23:21:55.391819    3767 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.836494  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:56 kubernetes-upgrade-231829 kubelet[3780]: E1101 23:21:56.149341    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.836849  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:56 kubernetes-upgrade-231829 kubelet[3925]: E1101 23:21:56.890070    3925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.837201  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:57 kubernetes-upgrade-231829 kubelet[3936]: E1101 23:21:57.638687    3936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.837544  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:58 kubernetes-upgrade-231829 kubelet[3947]: E1101 23:21:58.400409    3947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.837893  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:59 kubernetes-upgrade-231829 kubelet[3958]: E1101 23:21:59.145294    3958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.838241  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:59 kubernetes-upgrade-231829 kubelet[3968]: E1101 23:21:59.974939    3968 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.838592  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:00 kubernetes-upgrade-231829 kubelet[3979]: E1101 23:22:00.640573    3979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.838933  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:01 kubernetes-upgrade-231829 kubelet[3991]: E1101 23:22:01.401862    3991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.839291  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:02 kubernetes-upgrade-231829 kubelet[4003]: E1101 23:22:02.142917    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.839726  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:02 kubernetes-upgrade-231829 kubelet[4014]: E1101 23:22:02.890049    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.840084  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:03 kubernetes-upgrade-231829 kubelet[4025]: E1101 23:22:03.639488    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.840425  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:04 kubernetes-upgrade-231829 kubelet[4036]: E1101 23:22:04.392034    4036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.840767  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4047]: E1101 23:22:05.138946    4047 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.841187  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4058]: E1101 23:22:05.888813    4058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.841779  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:06 kubernetes-upgrade-231829 kubelet[4071]: E1101 23:22:06.645296    4071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.842247  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:07 kubernetes-upgrade-231829 kubelet[4220]: E1101 23:22:07.387754    4220 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.842591  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:08 kubernetes-upgrade-231829 kubelet[4231]: E1101 23:22:08.139031    4231 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.842933  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:08 kubernetes-upgrade-231829 kubelet[4242]: E1101 23:22:08.889682    4242 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.843297  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:09 kubernetes-upgrade-231829 kubelet[4253]: E1101 23:22:09.639661    4253 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.843686  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:10 kubernetes-upgrade-231829 kubelet[4264]: E1101 23:22:10.390263    4264 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.844037  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:11 kubernetes-upgrade-231829 kubelet[4275]: E1101 23:22:11.140372    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.844393  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:11 kubernetes-upgrade-231829 kubelet[4287]: E1101 23:22:11.890679    4287 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.844737  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:12 kubernetes-upgrade-231829 kubelet[4298]: E1101 23:22:12.641013    4298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.845084  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:13 kubernetes-upgrade-231829 kubelet[4310]: E1101 23:22:13.389025    4310 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.845467  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4322]: E1101 23:22:14.138835    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.845809  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4333]: E1101 23:22:14.887210    4333 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.846168  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:15 kubernetes-upgrade-231829 kubelet[4344]: E1101 23:22:15.638855    4344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.846515  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:16 kubernetes-upgrade-231829 kubelet[4355]: E1101 23:22:16.387714    4355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.846861  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:17 kubernetes-upgrade-231829 kubelet[4368]: E1101 23:22:17.146654    4368 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.847212  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:17 kubernetes-upgrade-231829 kubelet[4518]: E1101 23:22:17.891366    4518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.847580  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:18 kubernetes-upgrade-231829 kubelet[4528]: E1101 23:22:18.644345    4528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.847928  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:19 kubernetes-upgrade-231829 kubelet[4539]: E1101 23:22:19.391477    4539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.848277  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:20 kubernetes-upgrade-231829 kubelet[4550]: E1101 23:22:20.139596    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.848621  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:20 kubernetes-upgrade-231829 kubelet[4561]: E1101 23:22:20.890912    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.848974  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:21 kubernetes-upgrade-231829 kubelet[4572]: E1101 23:22:21.638338    4572 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.849317  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:22 kubernetes-upgrade-231829 kubelet[4583]: E1101 23:22:22.387545    4583 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.849669  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:23 kubernetes-upgrade-231829 kubelet[4593]: E1101 23:22:23.139918    4593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.850013  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:23 kubernetes-upgrade-231829 kubelet[4604]: E1101 23:22:23.890439    4604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.850361  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:24 kubernetes-upgrade-231829 kubelet[4615]: E1101 23:22:24.638081    4615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.850709  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:25 kubernetes-upgrade-231829 kubelet[4625]: E1101 23:22:25.392450    4625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.851071  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4635]: E1101 23:22:26.138693    4635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.851447  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4646]: E1101 23:22:26.890139    4646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.851799  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:27 kubernetes-upgrade-231829 kubelet[4659]: E1101 23:22:27.642679    4659 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:27.851915  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:22:27.851931  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:22:27.866539  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:22:27.866565  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:22:27.920610  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:22:27.920634  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:22:27.920643  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:22:27.954446  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:27.954473  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:22:27.954578  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:22:27.954590  185407 out.go:239]   Nov 01 23:22:24 kubernetes-upgrade-231829 kubelet[4615]: E1101 23:22:24.638081    4615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.954597  185407 out.go:239]   Nov 01 23:22:25 kubernetes-upgrade-231829 kubelet[4625]: E1101 23:22:25.392450    4625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.954609  185407 out.go:239]   Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4635]: E1101 23:22:26.138693    4635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.954617  185407 out.go:239]   Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4646]: E1101 23:22:26.890139    4646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:27.954629  185407 out.go:239]   Nov 01 23:22:27 kubernetes-upgrade-231829 kubelet[4659]: E1101 23:22:27.642679    4659 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:27.954640  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:27.954653  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
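The crash loop captured above has one root cause: Kubernetes removed the dockershim-era networking flags (including `--cni-conf-dir`) from kubelet, so the upgraded kubelet exits immediately every time minikube restarts it with the old flag, and no control-plane containers ever come up (hence the empty `crictl ps` results that follow). A minimal sketch of the grep triage minikube's log scanner performs, run against a few illustrative sample lines rather than the live journal (the path `/tmp/kubelet-journal.txt` is an assumption for the example):

```shell
#!/bin/sh
# Sketch: count kubelet crash-loop exits caused by the removed --cni-conf-dir
# flag in a captured journal excerpt. Sample lines mirror the report above;
# the file path and PIDs are illustrative only.
cat > /tmp/kubelet-journal.txt <<'EOF'
Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4635]: E1101 23:22:26.138693 4635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4646]: E1101 23:22:26.890139 4646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
Nov 01 23:22:27 kubernetes-upgrade-231829 kubelet[4659]: E1101 23:22:27.642679 4659 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
EOF
# Each match is one failed kubelet start; a steadily growing count means the
# unit is in a restart loop, not a one-off failure.
grep -c 'unknown flag: --cni-conf-dir' /tmp/kubelet-journal.txt
```

On a live node the same check would read from `journalctl -u kubelet` instead of a saved excerpt.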
	I1101 23:22:37.955086  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:22:38.094513  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:22:38.094574  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:22:38.121004  185407 cri.go:87] found id: ""
	I1101 23:22:38.121028  185407 logs.go:274] 0 containers: []
	W1101 23:22:38.121042  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:22:38.121051  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:22:38.121114  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:22:38.148841  185407 cri.go:87] found id: ""
	I1101 23:22:38.148866  185407 logs.go:274] 0 containers: []
	W1101 23:22:38.148874  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:22:38.148883  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:22:38.148960  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:22:38.174112  185407 cri.go:87] found id: ""
	I1101 23:22:38.174136  185407 logs.go:274] 0 containers: []
	W1101 23:22:38.174143  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:22:38.174149  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:22:38.174190  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:22:38.199480  185407 cri.go:87] found id: ""
	I1101 23:22:38.199505  185407 logs.go:274] 0 containers: []
	W1101 23:22:38.199513  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:22:38.199521  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:22:38.199577  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:22:38.223174  185407 cri.go:87] found id: ""
	I1101 23:22:38.223199  185407 logs.go:274] 0 containers: []
	W1101 23:22:38.223210  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:22:38.223216  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:22:38.223262  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:22:38.246766  185407 cri.go:87] found id: ""
	I1101 23:22:38.246791  185407 logs.go:274] 0 containers: []
	W1101 23:22:38.246797  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:22:38.246804  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:22:38.246850  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:22:38.270232  185407 cri.go:87] found id: ""
	I1101 23:22:38.270264  185407 logs.go:274] 0 containers: []
	W1101 23:22:38.270272  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:22:38.270281  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:22:38.270340  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:22:38.294652  185407 cri.go:87] found id: ""
	I1101 23:22:38.294682  185407 logs.go:274] 0 containers: []
	W1101 23:22:38.294690  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:22:38.294698  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:22:38.294708  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:22:38.329982  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:22:38.330014  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:22:38.356544  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:22:38.356569  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:22:38.371355  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:48 kubernetes-upgrade-231829 kubelet[3666]: E1101 23:21:48.643727    3666 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.371991  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:49 kubernetes-upgrade-231829 kubelet[3677]: E1101 23:21:49.388913    3677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.372569  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:50 kubernetes-upgrade-231829 kubelet[3689]: E1101 23:21:50.138547    3689 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.373158  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:50 kubernetes-upgrade-231829 kubelet[3700]: E1101 23:21:50.896408    3700 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.373729  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:51 kubernetes-upgrade-231829 kubelet[3711]: E1101 23:21:51.638374    3711 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.374304  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:52 kubernetes-upgrade-231829 kubelet[3722]: E1101 23:21:52.395768    3722 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.374879  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3733]: E1101 23:21:53.154323    3733 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.375464  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:53 kubernetes-upgrade-231829 kubelet[3744]: E1101 23:21:53.894216    3744 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.375858  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:54 kubernetes-upgrade-231829 kubelet[3756]: E1101 23:21:54.648817    3756 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.376228  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:55 kubernetes-upgrade-231829 kubelet[3767]: E1101 23:21:55.391819    3767 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.376574  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:56 kubernetes-upgrade-231829 kubelet[3780]: E1101 23:21:56.149341    3780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.376944  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:56 kubernetes-upgrade-231829 kubelet[3925]: E1101 23:21:56.890070    3925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.377292  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:57 kubernetes-upgrade-231829 kubelet[3936]: E1101 23:21:57.638687    3936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.377635  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:58 kubernetes-upgrade-231829 kubelet[3947]: E1101 23:21:58.400409    3947 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.377983  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:59 kubernetes-upgrade-231829 kubelet[3958]: E1101 23:21:59.145294    3958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.378344  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:59 kubernetes-upgrade-231829 kubelet[3968]: E1101 23:21:59.974939    3968 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.378688  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:00 kubernetes-upgrade-231829 kubelet[3979]: E1101 23:22:00.640573    3979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.379038  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:01 kubernetes-upgrade-231829 kubelet[3991]: E1101 23:22:01.401862    3991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.379380  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:02 kubernetes-upgrade-231829 kubelet[4003]: E1101 23:22:02.142917    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.379744  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:02 kubernetes-upgrade-231829 kubelet[4014]: E1101 23:22:02.890049    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.380093  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:03 kubernetes-upgrade-231829 kubelet[4025]: E1101 23:22:03.639488    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.380433  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:04 kubernetes-upgrade-231829 kubelet[4036]: E1101 23:22:04.392034    4036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.380793  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4047]: E1101 23:22:05.138946    4047 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.381141  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4058]: E1101 23:22:05.888813    4058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.381486  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:06 kubernetes-upgrade-231829 kubelet[4071]: E1101 23:22:06.645296    4071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.381837  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:07 kubernetes-upgrade-231829 kubelet[4220]: E1101 23:22:07.387754    4220 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.382189  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:08 kubernetes-upgrade-231829 kubelet[4231]: E1101 23:22:08.139031    4231 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.382539  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:08 kubernetes-upgrade-231829 kubelet[4242]: E1101 23:22:08.889682    4242 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.382892  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:09 kubernetes-upgrade-231829 kubelet[4253]: E1101 23:22:09.639661    4253 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.383241  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:10 kubernetes-upgrade-231829 kubelet[4264]: E1101 23:22:10.390263    4264 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.383608  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:11 kubernetes-upgrade-231829 kubelet[4275]: E1101 23:22:11.140372    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.383961  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:11 kubernetes-upgrade-231829 kubelet[4287]: E1101 23:22:11.890679    4287 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.384310  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:12 kubernetes-upgrade-231829 kubelet[4298]: E1101 23:22:12.641013    4298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.384657  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:13 kubernetes-upgrade-231829 kubelet[4310]: E1101 23:22:13.389025    4310 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.385013  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4322]: E1101 23:22:14.138835    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.385368  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4333]: E1101 23:22:14.887210    4333 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.385712  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:15 kubernetes-upgrade-231829 kubelet[4344]: E1101 23:22:15.638855    4344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.386084  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:16 kubernetes-upgrade-231829 kubelet[4355]: E1101 23:22:16.387714    4355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.386428  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:17 kubernetes-upgrade-231829 kubelet[4368]: E1101 23:22:17.146654    4368 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.386778  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:17 kubernetes-upgrade-231829 kubelet[4518]: E1101 23:22:17.891366    4518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.387128  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:18 kubernetes-upgrade-231829 kubelet[4528]: E1101 23:22:18.644345    4528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.387560  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:19 kubernetes-upgrade-231829 kubelet[4539]: E1101 23:22:19.391477    4539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.387910  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:20 kubernetes-upgrade-231829 kubelet[4550]: E1101 23:22:20.139596    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.388251  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:20 kubernetes-upgrade-231829 kubelet[4561]: E1101 23:22:20.890912    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.388598  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:21 kubernetes-upgrade-231829 kubelet[4572]: E1101 23:22:21.638338    4572 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.388954  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:22 kubernetes-upgrade-231829 kubelet[4583]: E1101 23:22:22.387545    4583 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.389300  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:23 kubernetes-upgrade-231829 kubelet[4593]: E1101 23:22:23.139918    4593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.389646  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:23 kubernetes-upgrade-231829 kubelet[4604]: E1101 23:22:23.890439    4604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.389995  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:24 kubernetes-upgrade-231829 kubelet[4615]: E1101 23:22:24.638081    4615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.390339  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:25 kubernetes-upgrade-231829 kubelet[4625]: E1101 23:22:25.392450    4625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.390690  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4635]: E1101 23:22:26.138693    4635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.391040  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4646]: E1101 23:22:26.890139    4646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.391383  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:27 kubernetes-upgrade-231829 kubelet[4659]: E1101 23:22:27.642679    4659 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.391753  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:28 kubernetes-upgrade-231829 kubelet[4805]: E1101 23:22:28.391691    4805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.392100  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:29 kubernetes-upgrade-231829 kubelet[4817]: E1101 23:22:29.138872    4817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.392451  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:29 kubernetes-upgrade-231829 kubelet[4828]: E1101 23:22:29.900150    4828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.392813  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:30 kubernetes-upgrade-231829 kubelet[4839]: E1101 23:22:30.648388    4839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.393155  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:31 kubernetes-upgrade-231829 kubelet[4849]: E1101 23:22:31.430690    4849 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.393510  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:32 kubernetes-upgrade-231829 kubelet[4859]: E1101 23:22:32.148929    4859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.393855  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:32 kubernetes-upgrade-231829 kubelet[4871]: E1101 23:22:32.888659    4871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.394200  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:33 kubernetes-upgrade-231829 kubelet[4881]: E1101 23:22:33.648547    4881 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.394547  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:34 kubernetes-upgrade-231829 kubelet[4891]: E1101 23:22:34.397130    4891 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.394893  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4901]: E1101 23:22:35.148366    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.395234  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4913]: E1101 23:22:35.892523    4913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.395614  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:36 kubernetes-upgrade-231829 kubelet[4923]: E1101 23:22:36.641391    4923 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.395962  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:37 kubernetes-upgrade-231829 kubelet[4936]: E1101 23:22:37.396783    4936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.396320  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:38 kubernetes-upgrade-231829 kubelet[4948]: E1101 23:22:38.145470    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:38.396436  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:22:38.396451  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:22:38.410751  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:22:38.410776  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:22:38.465343  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:22:38.465372  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:38.465384  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:22:38.465505  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:22:38.465519  185407 out.go:239]   Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4901]: E1101 23:22:35.148366    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.465527  185407 out.go:239]   Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4913]: E1101 23:22:35.892523    4913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.465535  185407 out.go:239]   Nov 01 23:22:36 kubernetes-upgrade-231829 kubelet[4923]: E1101 23:22:36.641391    4923 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.465545  185407 out.go:239]   Nov 01 23:22:37 kubernetes-upgrade-231829 kubelet[4936]: E1101 23:22:37.396783    4936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:38.465558  185407 out.go:239]   Nov 01 23:22:38 kubernetes-upgrade-231829 kubelet[4948]: E1101 23:22:38.145470    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:38.465568  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:38.465580  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:22:48.467520  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:22:48.594466  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:22:48.594523  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:22:48.627883  185407 cri.go:87] found id: ""
	I1101 23:22:48.627912  185407 logs.go:274] 0 containers: []
	W1101 23:22:48.627921  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:22:48.627930  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:22:48.627979  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:22:48.656891  185407 cri.go:87] found id: ""
	I1101 23:22:48.656918  185407 logs.go:274] 0 containers: []
	W1101 23:22:48.656924  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:22:48.656933  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:22:48.656971  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:22:48.679280  185407 cri.go:87] found id: ""
	I1101 23:22:48.679308  185407 logs.go:274] 0 containers: []
	W1101 23:22:48.679317  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:22:48.679325  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:22:48.679376  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:22:48.703060  185407 cri.go:87] found id: ""
	I1101 23:22:48.703084  185407 logs.go:274] 0 containers: []
	W1101 23:22:48.703091  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:22:48.703097  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:22:48.703148  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:22:48.730141  185407 cri.go:87] found id: ""
	I1101 23:22:48.730165  185407 logs.go:274] 0 containers: []
	W1101 23:22:48.730171  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:22:48.730179  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:22:48.730231  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:22:48.754932  185407 cri.go:87] found id: ""
	I1101 23:22:48.754960  185407 logs.go:274] 0 containers: []
	W1101 23:22:48.754969  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:22:48.754976  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:22:48.755028  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:22:48.779045  185407 cri.go:87] found id: ""
	I1101 23:22:48.779074  185407 logs.go:274] 0 containers: []
	W1101 23:22:48.779082  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:22:48.779090  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:22:48.779145  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:22:48.801539  185407 cri.go:87] found id: ""
	I1101 23:22:48.801565  185407 logs.go:274] 0 containers: []
	W1101 23:22:48.801574  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:22:48.801586  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:22:48.801599  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:22:48.818992  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:59 kubernetes-upgrade-231829 kubelet[3958]: E1101 23:21:59.145294    3958 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.819500  185407 logs.go:138] Found kubelet problem: Nov 01 23:21:59 kubernetes-upgrade-231829 kubelet[3968]: E1101 23:21:59.974939    3968 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.819937  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:00 kubernetes-upgrade-231829 kubelet[3979]: E1101 23:22:00.640573    3979 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.820492  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:01 kubernetes-upgrade-231829 kubelet[3991]: E1101 23:22:01.401862    3991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.820885  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:02 kubernetes-upgrade-231829 kubelet[4003]: E1101 23:22:02.142917    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.821430  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:02 kubernetes-upgrade-231829 kubelet[4014]: E1101 23:22:02.890049    4014 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.822012  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:03 kubernetes-upgrade-231829 kubelet[4025]: E1101 23:22:03.639488    4025 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.822594  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:04 kubernetes-upgrade-231829 kubelet[4036]: E1101 23:22:04.392034    4036 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.823112  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4047]: E1101 23:22:05.138946    4047 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.823499  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:05 kubernetes-upgrade-231829 kubelet[4058]: E1101 23:22:05.888813    4058 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.823866  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:06 kubernetes-upgrade-231829 kubelet[4071]: E1101 23:22:06.645296    4071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.824211  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:07 kubernetes-upgrade-231829 kubelet[4220]: E1101 23:22:07.387754    4220 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.824571  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:08 kubernetes-upgrade-231829 kubelet[4231]: E1101 23:22:08.139031    4231 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.824919  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:08 kubernetes-upgrade-231829 kubelet[4242]: E1101 23:22:08.889682    4242 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.825256  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:09 kubernetes-upgrade-231829 kubelet[4253]: E1101 23:22:09.639661    4253 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.825627  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:10 kubernetes-upgrade-231829 kubelet[4264]: E1101 23:22:10.390263    4264 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.825992  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:11 kubernetes-upgrade-231829 kubelet[4275]: E1101 23:22:11.140372    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.826336  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:11 kubernetes-upgrade-231829 kubelet[4287]: E1101 23:22:11.890679    4287 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.826715  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:12 kubernetes-upgrade-231829 kubelet[4298]: E1101 23:22:12.641013    4298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.827069  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:13 kubernetes-upgrade-231829 kubelet[4310]: E1101 23:22:13.389025    4310 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.827441  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4322]: E1101 23:22:14.138835    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.827789  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4333]: E1101 23:22:14.887210    4333 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.828129  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:15 kubernetes-upgrade-231829 kubelet[4344]: E1101 23:22:15.638855    4344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.828482  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:16 kubernetes-upgrade-231829 kubelet[4355]: E1101 23:22:16.387714    4355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.828848  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:17 kubernetes-upgrade-231829 kubelet[4368]: E1101 23:22:17.146654    4368 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.829192  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:17 kubernetes-upgrade-231829 kubelet[4518]: E1101 23:22:17.891366    4518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.829534  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:18 kubernetes-upgrade-231829 kubelet[4528]: E1101 23:22:18.644345    4528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.829878  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:19 kubernetes-upgrade-231829 kubelet[4539]: E1101 23:22:19.391477    4539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.830223  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:20 kubernetes-upgrade-231829 kubelet[4550]: E1101 23:22:20.139596    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.830563  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:20 kubernetes-upgrade-231829 kubelet[4561]: E1101 23:22:20.890912    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.830919  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:21 kubernetes-upgrade-231829 kubelet[4572]: E1101 23:22:21.638338    4572 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.831265  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:22 kubernetes-upgrade-231829 kubelet[4583]: E1101 23:22:22.387545    4583 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.831731  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:23 kubernetes-upgrade-231829 kubelet[4593]: E1101 23:22:23.139918    4593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.832080  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:23 kubernetes-upgrade-231829 kubelet[4604]: E1101 23:22:23.890439    4604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.832447  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:24 kubernetes-upgrade-231829 kubelet[4615]: E1101 23:22:24.638081    4615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.832815  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:25 kubernetes-upgrade-231829 kubelet[4625]: E1101 23:22:25.392450    4625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.833162  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4635]: E1101 23:22:26.138693    4635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.833505  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4646]: E1101 23:22:26.890139    4646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.833873  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:27 kubernetes-upgrade-231829 kubelet[4659]: E1101 23:22:27.642679    4659 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.834220  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:28 kubernetes-upgrade-231829 kubelet[4805]: E1101 23:22:28.391691    4805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.834562  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:29 kubernetes-upgrade-231829 kubelet[4817]: E1101 23:22:29.138872    4817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.834910  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:29 kubernetes-upgrade-231829 kubelet[4828]: E1101 23:22:29.900150    4828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.835252  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:30 kubernetes-upgrade-231829 kubelet[4839]: E1101 23:22:30.648388    4839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.835635  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:31 kubernetes-upgrade-231829 kubelet[4849]: E1101 23:22:31.430690    4849 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.835986  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:32 kubernetes-upgrade-231829 kubelet[4859]: E1101 23:22:32.148929    4859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.836332  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:32 kubernetes-upgrade-231829 kubelet[4871]: E1101 23:22:32.888659    4871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.836710  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:33 kubernetes-upgrade-231829 kubelet[4881]: E1101 23:22:33.648547    4881 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.837079  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:34 kubernetes-upgrade-231829 kubelet[4891]: E1101 23:22:34.397130    4891 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.837427  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4901]: E1101 23:22:35.148366    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.837805  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4913]: E1101 23:22:35.892523    4913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.838160  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:36 kubernetes-upgrade-231829 kubelet[4923]: E1101 23:22:36.641391    4923 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.838500  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:37 kubernetes-upgrade-231829 kubelet[4936]: E1101 23:22:37.396783    4936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.838845  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:38 kubernetes-upgrade-231829 kubelet[4948]: E1101 23:22:38.145470    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.839452  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:38 kubernetes-upgrade-231829 kubelet[5094]: E1101 23:22:38.891336    5094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.840079  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:39 kubernetes-upgrade-231829 kubelet[5105]: E1101 23:22:39.639108    5105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.840725  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:40 kubernetes-upgrade-231829 kubelet[5116]: E1101 23:22:40.390019    5116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.841368  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:41 kubernetes-upgrade-231829 kubelet[5127]: E1101 23:22:41.141623    5127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.842019  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:41 kubernetes-upgrade-231829 kubelet[5138]: E1101 23:22:41.895024    5138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.842664  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:42 kubernetes-upgrade-231829 kubelet[5148]: E1101 23:22:42.645515    5148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.843302  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:43 kubernetes-upgrade-231829 kubelet[5159]: E1101 23:22:43.394868    5159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.843953  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:44 kubernetes-upgrade-231829 kubelet[5170]: E1101 23:22:44.141918    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.844605  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:44 kubernetes-upgrade-231829 kubelet[5181]: E1101 23:22:44.893182    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.845243  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:45 kubernetes-upgrade-231829 kubelet[5193]: E1101 23:22:45.641198    5193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.845886  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:46 kubernetes-upgrade-231829 kubelet[5205]: E1101 23:22:46.389810    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.846526  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5216]: E1101 23:22:47.149038    5216 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.847168  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5227]: E1101 23:22:47.898200    5227 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:48.847827  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:48 kubernetes-upgrade-231829 kubelet[5240]: E1101 23:22:48.648662    5240 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:48.848042  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:22:48.848063  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:22:48.867356  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:22:48.867386  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:22:48.923392  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:22:48.923482  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:22:48.923501  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:22:48.975960  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:22:48.976003  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:22:49.002105  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:49.002127  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:22:49.002249  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:22:49.002266  185407 out.go:239]   Nov 01 23:22:45 kubernetes-upgrade-231829 kubelet[5193]: E1101 23:22:45.641198    5193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:49.002273  185407 out.go:239]   Nov 01 23:22:46 kubernetes-upgrade-231829 kubelet[5205]: E1101 23:22:46.389810    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:49.002281  185407 out.go:239]   Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5216]: E1101 23:22:47.149038    5216 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:49.002291  185407 out.go:239]   Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5227]: E1101 23:22:47.898200    5227 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:49.002299  185407 out.go:239]   Nov 01 23:22:48 kubernetes-upgrade-231829 kubelet[5240]: E1101 23:22:48.648662    5240 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:49.002308  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:49.002314  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:22:59.004177  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:22:59.095032  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:22:59.095102  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:22:59.123466  185407 cri.go:87] found id: ""
	I1101 23:22:59.123497  185407 logs.go:274] 0 containers: []
	W1101 23:22:59.123507  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:22:59.123515  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:22:59.123566  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:22:59.150651  185407 cri.go:87] found id: ""
	I1101 23:22:59.150680  185407 logs.go:274] 0 containers: []
	W1101 23:22:59.150689  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:22:59.150698  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:22:59.150757  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:22:59.176916  185407 cri.go:87] found id: ""
	I1101 23:22:59.176941  185407 logs.go:274] 0 containers: []
	W1101 23:22:59.176948  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:22:59.176955  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:22:59.177000  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:22:59.200621  185407 cri.go:87] found id: ""
	I1101 23:22:59.200649  185407 logs.go:274] 0 containers: []
	W1101 23:22:59.200656  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:22:59.200663  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:22:59.200711  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:22:59.231058  185407 cri.go:87] found id: ""
	I1101 23:22:59.231089  185407 logs.go:274] 0 containers: []
	W1101 23:22:59.231097  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:22:59.231105  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:22:59.231155  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:22:59.256052  185407 cri.go:87] found id: ""
	I1101 23:22:59.256078  185407 logs.go:274] 0 containers: []
	W1101 23:22:59.256086  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:22:59.256094  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:22:59.256146  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:22:59.279333  185407 cri.go:87] found id: ""
	I1101 23:22:59.279360  185407 logs.go:274] 0 containers: []
	W1101 23:22:59.279368  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:22:59.279376  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:22:59.279464  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:22:59.303501  185407 cri.go:87] found id: ""
	I1101 23:22:59.303524  185407 logs.go:274] 0 containers: []
	W1101 23:22:59.303532  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:22:59.303543  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:22:59.303568  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:22:59.318795  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:22:59.318823  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:22:59.374954  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:22:59.374982  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:22:59.374991  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:22:59.409091  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:22:59.409124  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:22:59.436519  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:22:59.436547  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:22:59.456488  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:09 kubernetes-upgrade-231829 kubelet[4253]: E1101 23:22:09.639661    4253 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.457081  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:10 kubernetes-upgrade-231829 kubelet[4264]: E1101 23:22:10.390263    4264 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.457577  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:11 kubernetes-upgrade-231829 kubelet[4275]: E1101 23:22:11.140372    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.457937  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:11 kubernetes-upgrade-231829 kubelet[4287]: E1101 23:22:11.890679    4287 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.458293  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:12 kubernetes-upgrade-231829 kubelet[4298]: E1101 23:22:12.641013    4298 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.458648  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:13 kubernetes-upgrade-231829 kubelet[4310]: E1101 23:22:13.389025    4310 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.459020  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4322]: E1101 23:22:14.138835    4322 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.459380  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:14 kubernetes-upgrade-231829 kubelet[4333]: E1101 23:22:14.887210    4333 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.459767  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:15 kubernetes-upgrade-231829 kubelet[4344]: E1101 23:22:15.638855    4344 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.460213  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:16 kubernetes-upgrade-231829 kubelet[4355]: E1101 23:22:16.387714    4355 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.460565  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:17 kubernetes-upgrade-231829 kubelet[4368]: E1101 23:22:17.146654    4368 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.460904  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:17 kubernetes-upgrade-231829 kubelet[4518]: E1101 23:22:17.891366    4518 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.461254  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:18 kubernetes-upgrade-231829 kubelet[4528]: E1101 23:22:18.644345    4528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.461598  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:19 kubernetes-upgrade-231829 kubelet[4539]: E1101 23:22:19.391477    4539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.461942  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:20 kubernetes-upgrade-231829 kubelet[4550]: E1101 23:22:20.139596    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.462305  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:20 kubernetes-upgrade-231829 kubelet[4561]: E1101 23:22:20.890912    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.462760  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:21 kubernetes-upgrade-231829 kubelet[4572]: E1101 23:22:21.638338    4572 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.463210  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:22 kubernetes-upgrade-231829 kubelet[4583]: E1101 23:22:22.387545    4583 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.463668  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:23 kubernetes-upgrade-231829 kubelet[4593]: E1101 23:22:23.139918    4593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.464086  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:23 kubernetes-upgrade-231829 kubelet[4604]: E1101 23:22:23.890439    4604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.464467  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:24 kubernetes-upgrade-231829 kubelet[4615]: E1101 23:22:24.638081    4615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.464850  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:25 kubernetes-upgrade-231829 kubelet[4625]: E1101 23:22:25.392450    4625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.465237  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4635]: E1101 23:22:26.138693    4635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.465620  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4646]: E1101 23:22:26.890139    4646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.466005  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:27 kubernetes-upgrade-231829 kubelet[4659]: E1101 23:22:27.642679    4659 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.466374  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:28 kubernetes-upgrade-231829 kubelet[4805]: E1101 23:22:28.391691    4805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.466839  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:29 kubernetes-upgrade-231829 kubelet[4817]: E1101 23:22:29.138872    4817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.467256  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:29 kubernetes-upgrade-231829 kubelet[4828]: E1101 23:22:29.900150    4828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.467673  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:30 kubernetes-upgrade-231829 kubelet[4839]: E1101 23:22:30.648388    4839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.468086  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:31 kubernetes-upgrade-231829 kubelet[4849]: E1101 23:22:31.430690    4849 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.468506  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:32 kubernetes-upgrade-231829 kubelet[4859]: E1101 23:22:32.148929    4859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.468881  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:32 kubernetes-upgrade-231829 kubelet[4871]: E1101 23:22:32.888659    4871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.469257  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:33 kubernetes-upgrade-231829 kubelet[4881]: E1101 23:22:33.648547    4881 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.469639  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:34 kubernetes-upgrade-231829 kubelet[4891]: E1101 23:22:34.397130    4891 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.470015  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4901]: E1101 23:22:35.148366    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.470392  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4913]: E1101 23:22:35.892523    4913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.470763  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:36 kubernetes-upgrade-231829 kubelet[4923]: E1101 23:22:36.641391    4923 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.471132  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:37 kubernetes-upgrade-231829 kubelet[4936]: E1101 23:22:37.396783    4936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.471542  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:38 kubernetes-upgrade-231829 kubelet[4948]: E1101 23:22:38.145470    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.471968  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:38 kubernetes-upgrade-231829 kubelet[5094]: E1101 23:22:38.891336    5094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.472346  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:39 kubernetes-upgrade-231829 kubelet[5105]: E1101 23:22:39.639108    5105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.472736  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:40 kubernetes-upgrade-231829 kubelet[5116]: E1101 23:22:40.390019    5116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.473112  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:41 kubernetes-upgrade-231829 kubelet[5127]: E1101 23:22:41.141623    5127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.473491  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:41 kubernetes-upgrade-231829 kubelet[5138]: E1101 23:22:41.895024    5138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.473864  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:42 kubernetes-upgrade-231829 kubelet[5148]: E1101 23:22:42.645515    5148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.474236  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:43 kubernetes-upgrade-231829 kubelet[5159]: E1101 23:22:43.394868    5159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.474619  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:44 kubernetes-upgrade-231829 kubelet[5170]: E1101 23:22:44.141918    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.474992  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:44 kubernetes-upgrade-231829 kubelet[5181]: E1101 23:22:44.893182    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.475365  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:45 kubernetes-upgrade-231829 kubelet[5193]: E1101 23:22:45.641198    5193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.475763  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:46 kubernetes-upgrade-231829 kubelet[5205]: E1101 23:22:46.389810    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.476134  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5216]: E1101 23:22:47.149038    5216 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.476535  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5227]: E1101 23:22:47.898200    5227 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.476950  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:48 kubernetes-upgrade-231829 kubelet[5240]: E1101 23:22:48.648662    5240 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.477343  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:49 kubernetes-upgrade-231829 kubelet[5383]: E1101 23:22:49.397390    5383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.477722  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:50 kubernetes-upgrade-231829 kubelet[5394]: E1101 23:22:50.141785    5394 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.478137  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:50 kubernetes-upgrade-231829 kubelet[5405]: E1101 23:22:50.902939    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.478591  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:51 kubernetes-upgrade-231829 kubelet[5416]: E1101 23:22:51.638230    5416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.479012  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:52 kubernetes-upgrade-231829 kubelet[5427]: E1101 23:22:52.406197    5427 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.479523  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:53 kubernetes-upgrade-231829 kubelet[5437]: E1101 23:22:53.147831    5437 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.479876  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:53 kubernetes-upgrade-231829 kubelet[5447]: E1101 23:22:53.906690    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.480230  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:54 kubernetes-upgrade-231829 kubelet[5459]: E1101 23:22:54.644411    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.480615  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:55 kubernetes-upgrade-231829 kubelet[5469]: E1101 23:22:55.389840    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.480964  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5479]: E1101 23:22:56.149894    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.481317  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5489]: E1101 23:22:56.986120    5489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.481667  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:57 kubernetes-upgrade-231829 kubelet[5501]: E1101 23:22:57.664579    5501 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.482166  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:58 kubernetes-upgrade-231829 kubelet[5511]: E1101 23:22:58.390245    5511 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.482532  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:59 kubernetes-upgrade-231829 kubelet[5525]: E1101 23:22:59.148971    5525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:59.482651  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:59.482663  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:22:59.482758  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:22:59.482770  185407 out.go:239]   Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5479]: E1101 23:22:56.149894    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.482774  185407 out.go:239]   Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5489]: E1101 23:22:56.986120    5489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.482778  185407 out.go:239]   Nov 01 23:22:57 kubernetes-upgrade-231829 kubelet[5501]: E1101 23:22:57.664579    5501 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.482829  185407 out.go:239]   Nov 01 23:22:58 kubernetes-upgrade-231829 kubelet[5511]: E1101 23:22:58.390245    5511 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:22:59.482861  185407 out.go:239]   Nov 01 23:22:59 kubernetes-upgrade-231829 kubelet[5525]: E1101 23:22:59.148971    5525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:22:59.482869  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:22:59.482879  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:23:09.484580  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:23:09.594965  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:23:09.595026  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:23:09.621597  185407 cri.go:87] found id: ""
	I1101 23:23:09.621624  185407 logs.go:274] 0 containers: []
	W1101 23:23:09.621632  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:23:09.621640  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:23:09.621700  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:23:09.646576  185407 cri.go:87] found id: ""
	I1101 23:23:09.646604  185407 logs.go:274] 0 containers: []
	W1101 23:23:09.646613  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:23:09.646621  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:23:09.646672  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:23:09.669877  185407 cri.go:87] found id: ""
	I1101 23:23:09.669898  185407 logs.go:274] 0 containers: []
	W1101 23:23:09.669903  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:23:09.669910  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:23:09.669951  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:23:09.693686  185407 cri.go:87] found id: ""
	I1101 23:23:09.693717  185407 logs.go:274] 0 containers: []
	W1101 23:23:09.693726  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:23:09.693735  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:23:09.693778  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:23:09.716104  185407 cri.go:87] found id: ""
	I1101 23:23:09.716133  185407 logs.go:274] 0 containers: []
	W1101 23:23:09.716139  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:23:09.716145  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:23:09.716190  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:23:09.739347  185407 cri.go:87] found id: ""
	I1101 23:23:09.739376  185407 logs.go:274] 0 containers: []
	W1101 23:23:09.739385  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:23:09.739427  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:23:09.739481  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:23:09.762118  185407 cri.go:87] found id: ""
	I1101 23:23:09.762144  185407 logs.go:274] 0 containers: []
	W1101 23:23:09.762153  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:23:09.762161  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:23:09.762217  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:23:09.784410  185407 cri.go:87] found id: ""
	I1101 23:23:09.784434  185407 logs.go:274] 0 containers: []
	W1101 23:23:09.784440  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:23:09.784449  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:23:09.784459  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:23:09.800478  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:20 kubernetes-upgrade-231829 kubelet[4550]: E1101 23:22:20.139596    4550 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.800838  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:20 kubernetes-upgrade-231829 kubelet[4561]: E1101 23:22:20.890912    4561 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.801198  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:21 kubernetes-upgrade-231829 kubelet[4572]: E1101 23:22:21.638338    4572 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.801546  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:22 kubernetes-upgrade-231829 kubelet[4583]: E1101 23:22:22.387545    4583 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.801890  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:23 kubernetes-upgrade-231829 kubelet[4593]: E1101 23:22:23.139918    4593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.802228  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:23 kubernetes-upgrade-231829 kubelet[4604]: E1101 23:22:23.890439    4604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.802570  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:24 kubernetes-upgrade-231829 kubelet[4615]: E1101 23:22:24.638081    4615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.802914  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:25 kubernetes-upgrade-231829 kubelet[4625]: E1101 23:22:25.392450    4625 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.803266  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4635]: E1101 23:22:26.138693    4635 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.803654  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:26 kubernetes-upgrade-231829 kubelet[4646]: E1101 23:22:26.890139    4646 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.803998  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:27 kubernetes-upgrade-231829 kubelet[4659]: E1101 23:22:27.642679    4659 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.804341  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:28 kubernetes-upgrade-231829 kubelet[4805]: E1101 23:22:28.391691    4805 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.804685  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:29 kubernetes-upgrade-231829 kubelet[4817]: E1101 23:22:29.138872    4817 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.805038  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:29 kubernetes-upgrade-231829 kubelet[4828]: E1101 23:22:29.900150    4828 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.805383  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:30 kubernetes-upgrade-231829 kubelet[4839]: E1101 23:22:30.648388    4839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.805741  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:31 kubernetes-upgrade-231829 kubelet[4849]: E1101 23:22:31.430690    4849 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.806081  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:32 kubernetes-upgrade-231829 kubelet[4859]: E1101 23:22:32.148929    4859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.806425  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:32 kubernetes-upgrade-231829 kubelet[4871]: E1101 23:22:32.888659    4871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.806777  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:33 kubernetes-upgrade-231829 kubelet[4881]: E1101 23:22:33.648547    4881 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.807125  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:34 kubernetes-upgrade-231829 kubelet[4891]: E1101 23:22:34.397130    4891 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.807492  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4901]: E1101 23:22:35.148366    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.807842  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4913]: E1101 23:22:35.892523    4913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.808185  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:36 kubernetes-upgrade-231829 kubelet[4923]: E1101 23:22:36.641391    4923 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.808528  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:37 kubernetes-upgrade-231829 kubelet[4936]: E1101 23:22:37.396783    4936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.808880  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:38 kubernetes-upgrade-231829 kubelet[4948]: E1101 23:22:38.145470    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.809224  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:38 kubernetes-upgrade-231829 kubelet[5094]: E1101 23:22:38.891336    5094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.809578  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:39 kubernetes-upgrade-231829 kubelet[5105]: E1101 23:22:39.639108    5105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.809934  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:40 kubernetes-upgrade-231829 kubelet[5116]: E1101 23:22:40.390019    5116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.810318  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:41 kubernetes-upgrade-231829 kubelet[5127]: E1101 23:22:41.141623    5127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.810697  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:41 kubernetes-upgrade-231829 kubelet[5138]: E1101 23:22:41.895024    5138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.811051  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:42 kubernetes-upgrade-231829 kubelet[5148]: E1101 23:22:42.645515    5148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.811390  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:43 kubernetes-upgrade-231829 kubelet[5159]: E1101 23:22:43.394868    5159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.811760  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:44 kubernetes-upgrade-231829 kubelet[5170]: E1101 23:22:44.141918    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.812133  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:44 kubernetes-upgrade-231829 kubelet[5181]: E1101 23:22:44.893182    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.812470  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:45 kubernetes-upgrade-231829 kubelet[5193]: E1101 23:22:45.641198    5193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.812827  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:46 kubernetes-upgrade-231829 kubelet[5205]: E1101 23:22:46.389810    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.813176  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5216]: E1101 23:22:47.149038    5216 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.813517  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5227]: E1101 23:22:47.898200    5227 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.813866  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:48 kubernetes-upgrade-231829 kubelet[5240]: E1101 23:22:48.648662    5240 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.814210  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:49 kubernetes-upgrade-231829 kubelet[5383]: E1101 23:22:49.397390    5383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.814600  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:50 kubernetes-upgrade-231829 kubelet[5394]: E1101 23:22:50.141785    5394 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.814955  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:50 kubernetes-upgrade-231829 kubelet[5405]: E1101 23:22:50.902939    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.815295  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:51 kubernetes-upgrade-231829 kubelet[5416]: E1101 23:22:51.638230    5416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.815689  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:52 kubernetes-upgrade-231829 kubelet[5427]: E1101 23:22:52.406197    5427 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.816050  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:53 kubernetes-upgrade-231829 kubelet[5437]: E1101 23:22:53.147831    5437 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.816396  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:53 kubernetes-upgrade-231829 kubelet[5447]: E1101 23:22:53.906690    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.816742  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:54 kubernetes-upgrade-231829 kubelet[5459]: E1101 23:22:54.644411    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.817089  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:55 kubernetes-upgrade-231829 kubelet[5469]: E1101 23:22:55.389840    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.817470  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5479]: E1101 23:22:56.149894    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.817832  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5489]: E1101 23:22:56.986120    5489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.818181  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:57 kubernetes-upgrade-231829 kubelet[5501]: E1101 23:22:57.664579    5501 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.818528  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:58 kubernetes-upgrade-231829 kubelet[5511]: E1101 23:22:58.390245    5511 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.818874  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:59 kubernetes-upgrade-231829 kubelet[5525]: E1101 23:22:59.148971    5525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.819224  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:59 kubernetes-upgrade-231829 kubelet[5673]: E1101 23:22:59.893778    5673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.819663  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:00 kubernetes-upgrade-231829 kubelet[5685]: E1101 23:23:00.649173    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.820058  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:01 kubernetes-upgrade-231829 kubelet[5696]: E1101 23:23:01.390143    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.820399  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:02 kubernetes-upgrade-231829 kubelet[5707]: E1101 23:23:02.145486    5707 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.820771  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:02 kubernetes-upgrade-231829 kubelet[5719]: E1101 23:23:02.892256    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.821115  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:03 kubernetes-upgrade-231829 kubelet[5730]: E1101 23:23:03.646136    5730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.821457  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:04 kubernetes-upgrade-231829 kubelet[5741]: E1101 23:23:04.392954    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.821807  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:05 kubernetes-upgrade-231829 kubelet[5752]: E1101 23:23:05.148886    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.822151  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:05 kubernetes-upgrade-231829 kubelet[5763]: E1101 23:23:05.892201    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.822496  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:06 kubernetes-upgrade-231829 kubelet[5774]: E1101 23:23:06.648789    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.822840  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:07 kubernetes-upgrade-231829 kubelet[5785]: E1101 23:23:07.395001    5785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.823185  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5797]: E1101 23:23:08.155765    5797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.823553  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5807]: E1101 23:23:08.892662    5807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.823906  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:09 kubernetes-upgrade-231829 kubelet[5820]: E1101 23:23:09.644378    5820 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:23:09.824024  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:23:09.824038  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:23:09.840325  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:23:09.840353  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:23:09.895299  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:23:09.895324  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:23:09.895335  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:23:09.928808  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:23:09.928838  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:23:09.953821  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:23:09.953845  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:23:09.953959  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:23:09.953977  185407 out.go:239]   Nov 01 23:23:06 kubernetes-upgrade-231829 kubelet[5774]: E1101 23:23:06.648789    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.953985  185407 out.go:239]   Nov 01 23:23:07 kubernetes-upgrade-231829 kubelet[5785]: E1101 23:23:07.395001    5785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.953996  185407 out.go:239]   Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5797]: E1101 23:23:08.155765    5797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.954003  185407 out.go:239]   Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5807]: E1101 23:23:08.892662    5807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:09.954009  185407 out.go:239]   Nov 01 23:23:09 kubernetes-upgrade-231829 kubelet[5820]: E1101 23:23:09.644378    5820 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:23:09.954014  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:23:09.954022  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:23:19.954541  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:23:20.094656  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:23:20.094745  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:23:20.120253  185407 cri.go:87] found id: ""
	I1101 23:23:20.120283  185407 logs.go:274] 0 containers: []
	W1101 23:23:20.120293  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:23:20.120302  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:23:20.120366  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:23:20.150551  185407 cri.go:87] found id: ""
	I1101 23:23:20.150581  185407 logs.go:274] 0 containers: []
	W1101 23:23:20.150591  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:23:20.150598  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:23:20.150650  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:23:20.174824  185407 cri.go:87] found id: ""
	I1101 23:23:20.174856  185407 logs.go:274] 0 containers: []
	W1101 23:23:20.174864  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:23:20.174870  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:23:20.174924  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:23:20.205990  185407 cri.go:87] found id: ""
	I1101 23:23:20.206021  185407 logs.go:274] 0 containers: []
	W1101 23:23:20.206031  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:23:20.206042  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:23:20.206099  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:23:20.237064  185407 cri.go:87] found id: ""
	I1101 23:23:20.237095  185407 logs.go:274] 0 containers: []
	W1101 23:23:20.237105  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:23:20.237115  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:23:20.237178  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:23:20.261151  185407 cri.go:87] found id: ""
	I1101 23:23:20.261175  185407 logs.go:274] 0 containers: []
	W1101 23:23:20.261182  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:23:20.261187  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:23:20.261236  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:23:20.284160  185407 cri.go:87] found id: ""
	I1101 23:23:20.284186  185407 logs.go:274] 0 containers: []
	W1101 23:23:20.284192  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:23:20.284200  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:23:20.284243  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:23:20.307584  185407 cri.go:87] found id: ""
	I1101 23:23:20.307607  185407 logs.go:274] 0 containers: []
	W1101 23:23:20.307613  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:23:20.307626  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:23:20.307637  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:23:20.333763  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:23:20.333787  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:23:20.351777  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:30 kubernetes-upgrade-231829 kubelet[4839]: E1101 23:22:30.648388    4839 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.352288  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:31 kubernetes-upgrade-231829 kubelet[4849]: E1101 23:22:31.430690    4849 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.352788  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:32 kubernetes-upgrade-231829 kubelet[4859]: E1101 23:22:32.148929    4859 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.353138  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:32 kubernetes-upgrade-231829 kubelet[4871]: E1101 23:22:32.888659    4871 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.353484  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:33 kubernetes-upgrade-231829 kubelet[4881]: E1101 23:22:33.648547    4881 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.353822  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:34 kubernetes-upgrade-231829 kubelet[4891]: E1101 23:22:34.397130    4891 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.354172  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4901]: E1101 23:22:35.148366    4901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.354553  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:35 kubernetes-upgrade-231829 kubelet[4913]: E1101 23:22:35.892523    4913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.354901  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:36 kubernetes-upgrade-231829 kubelet[4923]: E1101 23:22:36.641391    4923 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.355243  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:37 kubernetes-upgrade-231829 kubelet[4936]: E1101 23:22:37.396783    4936 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.355620  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:38 kubernetes-upgrade-231829 kubelet[4948]: E1101 23:22:38.145470    4948 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.355962  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:38 kubernetes-upgrade-231829 kubelet[5094]: E1101 23:22:38.891336    5094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.356314  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:39 kubernetes-upgrade-231829 kubelet[5105]: E1101 23:22:39.639108    5105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.356665  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:40 kubernetes-upgrade-231829 kubelet[5116]: E1101 23:22:40.390019    5116 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.357014  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:41 kubernetes-upgrade-231829 kubelet[5127]: E1101 23:22:41.141623    5127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.357356  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:41 kubernetes-upgrade-231829 kubelet[5138]: E1101 23:22:41.895024    5138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.357705  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:42 kubernetes-upgrade-231829 kubelet[5148]: E1101 23:22:42.645515    5148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.358055  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:43 kubernetes-upgrade-231829 kubelet[5159]: E1101 23:22:43.394868    5159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.358394  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:44 kubernetes-upgrade-231829 kubelet[5170]: E1101 23:22:44.141918    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.358785  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:44 kubernetes-upgrade-231829 kubelet[5181]: E1101 23:22:44.893182    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.359125  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:45 kubernetes-upgrade-231829 kubelet[5193]: E1101 23:22:45.641198    5193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.359505  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:46 kubernetes-upgrade-231829 kubelet[5205]: E1101 23:22:46.389810    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.359852  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5216]: E1101 23:22:47.149038    5216 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.360199  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5227]: E1101 23:22:47.898200    5227 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.360544  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:48 kubernetes-upgrade-231829 kubelet[5240]: E1101 23:22:48.648662    5240 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.360884  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:49 kubernetes-upgrade-231829 kubelet[5383]: E1101 23:22:49.397390    5383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.361226  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:50 kubernetes-upgrade-231829 kubelet[5394]: E1101 23:22:50.141785    5394 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.361591  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:50 kubernetes-upgrade-231829 kubelet[5405]: E1101 23:22:50.902939    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.361935  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:51 kubernetes-upgrade-231829 kubelet[5416]: E1101 23:22:51.638230    5416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.362279  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:52 kubernetes-upgrade-231829 kubelet[5427]: E1101 23:22:52.406197    5427 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.362619  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:53 kubernetes-upgrade-231829 kubelet[5437]: E1101 23:22:53.147831    5437 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.363026  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:53 kubernetes-upgrade-231829 kubelet[5447]: E1101 23:22:53.906690    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.363388  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:54 kubernetes-upgrade-231829 kubelet[5459]: E1101 23:22:54.644411    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.363793  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:55 kubernetes-upgrade-231829 kubelet[5469]: E1101 23:22:55.389840    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.364148  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5479]: E1101 23:22:56.149894    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.364510  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5489]: E1101 23:22:56.986120    5489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.364853  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:57 kubernetes-upgrade-231829 kubelet[5501]: E1101 23:22:57.664579    5501 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.365205  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:58 kubernetes-upgrade-231829 kubelet[5511]: E1101 23:22:58.390245    5511 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.365553  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:59 kubernetes-upgrade-231829 kubelet[5525]: E1101 23:22:59.148971    5525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.365897  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:59 kubernetes-upgrade-231829 kubelet[5673]: E1101 23:22:59.893778    5673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.366246  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:00 kubernetes-upgrade-231829 kubelet[5685]: E1101 23:23:00.649173    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.366599  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:01 kubernetes-upgrade-231829 kubelet[5696]: E1101 23:23:01.390143    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.366937  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:02 kubernetes-upgrade-231829 kubelet[5707]: E1101 23:23:02.145486    5707 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.367286  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:02 kubernetes-upgrade-231829 kubelet[5719]: E1101 23:23:02.892256    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.367645  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:03 kubernetes-upgrade-231829 kubelet[5730]: E1101 23:23:03.646136    5730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.367997  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:04 kubernetes-upgrade-231829 kubelet[5741]: E1101 23:23:04.392954    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.368340  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:05 kubernetes-upgrade-231829 kubelet[5752]: E1101 23:23:05.148886    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.368682  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:05 kubernetes-upgrade-231829 kubelet[5763]: E1101 23:23:05.892201    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.369024  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:06 kubernetes-upgrade-231829 kubelet[5774]: E1101 23:23:06.648789    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.369366  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:07 kubernetes-upgrade-231829 kubelet[5785]: E1101 23:23:07.395001    5785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.369708  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5797]: E1101 23:23:08.155765    5797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.370057  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5807]: E1101 23:23:08.892662    5807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.370394  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:09 kubernetes-upgrade-231829 kubelet[5820]: E1101 23:23:09.644378    5820 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.370740  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:10 kubernetes-upgrade-231829 kubelet[5969]: E1101 23:23:10.390409    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.371100  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:11 kubernetes-upgrade-231829 kubelet[5980]: E1101 23:23:11.140278    5980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.371467  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:11 kubernetes-upgrade-231829 kubelet[5991]: E1101 23:23:11.890955    5991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.371822  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:12 kubernetes-upgrade-231829 kubelet[6001]: E1101 23:23:12.638998    6001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.372203  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:13 kubernetes-upgrade-231829 kubelet[6012]: E1101 23:23:13.395636    6012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.372601  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:14 kubernetes-upgrade-231829 kubelet[6023]: E1101 23:23:14.138409    6023 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.372961  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:14 kubernetes-upgrade-231829 kubelet[6034]: E1101 23:23:14.892741    6034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.373347  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:15 kubernetes-upgrade-231829 kubelet[6045]: E1101 23:23:15.637535    6045 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.373702  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:16 kubernetes-upgrade-231829 kubelet[6056]: E1101 23:23:16.392743    6056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.374047  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6067]: E1101 23:23:17.139858    6067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.374497  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6079]: E1101 23:23:17.897706    6079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.375114  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:18 kubernetes-upgrade-231829 kubelet[6090]: E1101 23:23:18.639457    6090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.375539  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:19 kubernetes-upgrade-231829 kubelet[6101]: E1101 23:23:19.392562    6101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.375885  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:20 kubernetes-upgrade-231829 kubelet[6114]: E1101 23:23:20.149553    6114 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:23:20.376000  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:23:20.376017  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:23:20.393580  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:23:20.393609  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:23:20.448332  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:23:20.448358  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:23:20.448372  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:23:20.481491  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:23:20.481518  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:23:20.481621  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:23:20.481635  185407 out.go:239]   Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6067]: E1101 23:23:17.139858    6067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.481640  185407 out.go:239]   Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6079]: E1101 23:23:17.897706    6079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.481645  185407 out.go:239]   Nov 01 23:23:18 kubernetes-upgrade-231829 kubelet[6090]: E1101 23:23:18.639457    6090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.481653  185407 out.go:239]   Nov 01 23:23:19 kubernetes-upgrade-231829 kubelet[6101]: E1101 23:23:19.392562    6101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:20.481660  185407 out.go:239]   Nov 01 23:23:20 kubernetes-upgrade-231829 kubelet[6114]: E1101 23:23:20.149553    6114 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:23:20.481667  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:23:20.481673  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:23:30.483438  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:23:30.594432  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:23:30.594491  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:23:30.621400  185407 cri.go:87] found id: ""
	I1101 23:23:30.621433  185407 logs.go:274] 0 containers: []
	W1101 23:23:30.621442  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:23:30.621451  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:23:30.621500  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:23:30.649741  185407 cri.go:87] found id: ""
	I1101 23:23:30.649765  185407 logs.go:274] 0 containers: []
	W1101 23:23:30.649772  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:23:30.649778  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:23:30.649816  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:23:30.673636  185407 cri.go:87] found id: ""
	I1101 23:23:30.673659  185407 logs.go:274] 0 containers: []
	W1101 23:23:30.673665  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:23:30.673671  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:23:30.673716  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:23:30.695638  185407 cri.go:87] found id: ""
	I1101 23:23:30.695660  185407 logs.go:274] 0 containers: []
	W1101 23:23:30.695666  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:23:30.695674  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:23:30.695722  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:23:30.719134  185407 cri.go:87] found id: ""
	I1101 23:23:30.719156  185407 logs.go:274] 0 containers: []
	W1101 23:23:30.719161  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:23:30.719170  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:23:30.719223  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:23:30.742156  185407 cri.go:87] found id: ""
	I1101 23:23:30.742180  185407 logs.go:274] 0 containers: []
	W1101 23:23:30.742187  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:23:30.742193  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:23:30.742231  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:23:30.764517  185407 cri.go:87] found id: ""
	I1101 23:23:30.764539  185407 logs.go:274] 0 containers: []
	W1101 23:23:30.764544  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:23:30.764550  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:23:30.764593  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:23:30.786156  185407 cri.go:87] found id: ""
	I1101 23:23:30.786184  185407 logs.go:274] 0 containers: []
	W1101 23:23:30.786190  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:23:30.786200  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:23:30.786209  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:23:30.801774  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:41 kubernetes-upgrade-231829 kubelet[5127]: E1101 23:22:41.141623    5127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.802135  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:41 kubernetes-upgrade-231829 kubelet[5138]: E1101 23:22:41.895024    5138 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.802477  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:42 kubernetes-upgrade-231829 kubelet[5148]: E1101 23:22:42.645515    5148 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.802864  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:43 kubernetes-upgrade-231829 kubelet[5159]: E1101 23:22:43.394868    5159 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.803242  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:44 kubernetes-upgrade-231829 kubelet[5170]: E1101 23:22:44.141918    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.803662  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:44 kubernetes-upgrade-231829 kubelet[5181]: E1101 23:22:44.893182    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.804031  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:45 kubernetes-upgrade-231829 kubelet[5193]: E1101 23:22:45.641198    5193 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.804399  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:46 kubernetes-upgrade-231829 kubelet[5205]: E1101 23:22:46.389810    5205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.804777  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5216]: E1101 23:22:47.149038    5216 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.805146  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:47 kubernetes-upgrade-231829 kubelet[5227]: E1101 23:22:47.898200    5227 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.805526  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:48 kubernetes-upgrade-231829 kubelet[5240]: E1101 23:22:48.648662    5240 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.805903  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:49 kubernetes-upgrade-231829 kubelet[5383]: E1101 23:22:49.397390    5383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.806274  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:50 kubernetes-upgrade-231829 kubelet[5394]: E1101 23:22:50.141785    5394 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.806653  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:50 kubernetes-upgrade-231829 kubelet[5405]: E1101 23:22:50.902939    5405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.807024  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:51 kubernetes-upgrade-231829 kubelet[5416]: E1101 23:22:51.638230    5416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.807421  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:52 kubernetes-upgrade-231829 kubelet[5427]: E1101 23:22:52.406197    5427 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.807814  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:53 kubernetes-upgrade-231829 kubelet[5437]: E1101 23:22:53.147831    5437 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.808192  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:53 kubernetes-upgrade-231829 kubelet[5447]: E1101 23:22:53.906690    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.808564  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:54 kubernetes-upgrade-231829 kubelet[5459]: E1101 23:22:54.644411    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.808934  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:55 kubernetes-upgrade-231829 kubelet[5469]: E1101 23:22:55.389840    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.809308  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5479]: E1101 23:22:56.149894    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.809705  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5489]: E1101 23:22:56.986120    5489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.810084  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:57 kubernetes-upgrade-231829 kubelet[5501]: E1101 23:22:57.664579    5501 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.810457  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:58 kubernetes-upgrade-231829 kubelet[5511]: E1101 23:22:58.390245    5511 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.810835  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:59 kubernetes-upgrade-231829 kubelet[5525]: E1101 23:22:59.148971    5525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.811196  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:59 kubernetes-upgrade-231829 kubelet[5673]: E1101 23:22:59.893778    5673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.811600  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:00 kubernetes-upgrade-231829 kubelet[5685]: E1101 23:23:00.649173    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.811972  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:01 kubernetes-upgrade-231829 kubelet[5696]: E1101 23:23:01.390143    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.812333  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:02 kubernetes-upgrade-231829 kubelet[5707]: E1101 23:23:02.145486    5707 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.812707  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:02 kubernetes-upgrade-231829 kubelet[5719]: E1101 23:23:02.892256    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.813063  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:03 kubernetes-upgrade-231829 kubelet[5730]: E1101 23:23:03.646136    5730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.813425  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:04 kubernetes-upgrade-231829 kubelet[5741]: E1101 23:23:04.392954    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.813807  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:05 kubernetes-upgrade-231829 kubelet[5752]: E1101 23:23:05.148886    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.814175  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:05 kubernetes-upgrade-231829 kubelet[5763]: E1101 23:23:05.892201    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.814534  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:06 kubernetes-upgrade-231829 kubelet[5774]: E1101 23:23:06.648789    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.814896  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:07 kubernetes-upgrade-231829 kubelet[5785]: E1101 23:23:07.395001    5785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.815260  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5797]: E1101 23:23:08.155765    5797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.815652  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5807]: E1101 23:23:08.892662    5807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.816017  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:09 kubernetes-upgrade-231829 kubelet[5820]: E1101 23:23:09.644378    5820 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.816395  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:10 kubernetes-upgrade-231829 kubelet[5969]: E1101 23:23:10.390409    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.816757  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:11 kubernetes-upgrade-231829 kubelet[5980]: E1101 23:23:11.140278    5980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.817118  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:11 kubernetes-upgrade-231829 kubelet[5991]: E1101 23:23:11.890955    5991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.817477  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:12 kubernetes-upgrade-231829 kubelet[6001]: E1101 23:23:12.638998    6001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.817840  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:13 kubernetes-upgrade-231829 kubelet[6012]: E1101 23:23:13.395636    6012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.818229  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:14 kubernetes-upgrade-231829 kubelet[6023]: E1101 23:23:14.138409    6023 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.818596  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:14 kubernetes-upgrade-231829 kubelet[6034]: E1101 23:23:14.892741    6034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.818955  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:15 kubernetes-upgrade-231829 kubelet[6045]: E1101 23:23:15.637535    6045 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.819319  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:16 kubernetes-upgrade-231829 kubelet[6056]: E1101 23:23:16.392743    6056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.819715  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6067]: E1101 23:23:17.139858    6067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.820079  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6079]: E1101 23:23:17.897706    6079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.820439  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:18 kubernetes-upgrade-231829 kubelet[6090]: E1101 23:23:18.639457    6090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.820804  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:19 kubernetes-upgrade-231829 kubelet[6101]: E1101 23:23:19.392562    6101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.821175  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:20 kubernetes-upgrade-231829 kubelet[6114]: E1101 23:23:20.149553    6114 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.821543  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:20 kubernetes-upgrade-231829 kubelet[6260]: E1101 23:23:20.890908    6260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.821904  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:21 kubernetes-upgrade-231829 kubelet[6270]: E1101 23:23:21.644007    6270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.822265  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:22 kubernetes-upgrade-231829 kubelet[6281]: E1101 23:23:22.392197    6281 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.822639  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:23 kubernetes-upgrade-231829 kubelet[6291]: E1101 23:23:23.139205    6291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.823008  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:23 kubernetes-upgrade-231829 kubelet[6302]: E1101 23:23:23.891184    6302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.823364  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:24 kubernetes-upgrade-231829 kubelet[6313]: E1101 23:23:24.637909    6313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.823743  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:25 kubernetes-upgrade-231829 kubelet[6324]: E1101 23:23:25.394445    6324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.824131  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:26 kubernetes-upgrade-231829 kubelet[6334]: E1101 23:23:26.138983    6334 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.824509  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:26 kubernetes-upgrade-231829 kubelet[6345]: E1101 23:23:26.890412    6345 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.824873  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:27 kubernetes-upgrade-231829 kubelet[6356]: E1101 23:23:27.673240    6356 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.825263  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:28 kubernetes-upgrade-231829 kubelet[6366]: E1101 23:23:28.391636    6366 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.825638  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6377]: E1101 23:23:29.150112    6377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.826011  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6388]: E1101 23:23:29.890110    6388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.826392  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:30 kubernetes-upgrade-231829 kubelet[6401]: E1101 23:23:30.648807    6401 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:23:30.826534  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:23:30.826554  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:23:30.844482  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:23:30.844509  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:23:30.900816  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:23:30.900844  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:23:30.900860  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:23:30.935797  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:23:30.935827  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:23:30.960996  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:23:30.961020  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:23:30.961127  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:23:30.961142  185407 out.go:239]   Nov 01 23:23:27 kubernetes-upgrade-231829 kubelet[6356]: E1101 23:23:27.673240    6356 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.961149  185407 out.go:239]   Nov 01 23:23:28 kubernetes-upgrade-231829 kubelet[6366]: E1101 23:23:28.391636    6366 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.961156  185407 out.go:239]   Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6377]: E1101 23:23:29.150112    6377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.961163  185407 out.go:239]   Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6388]: E1101 23:23:29.890110    6388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6388]: E1101 23:23:29.890110    6388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:30.961172  185407 out.go:239]   Nov 01 23:23:30 kubernetes-upgrade-231829 kubelet[6401]: E1101 23:23:30.648807    6401 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:30 kubernetes-upgrade-231829 kubelet[6401]: E1101 23:23:30.648807    6401 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:23:30.961178  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:23:30.961185  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:23:40.962426  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:23:41.094261  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:23:41.094336  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:23:41.118381  185407 cri.go:87] found id: ""
	I1101 23:23:41.118413  185407 logs.go:274] 0 containers: []
	W1101 23:23:41.118422  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:23:41.118430  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:23:41.118489  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:23:41.143324  185407 cri.go:87] found id: ""
	I1101 23:23:41.143350  185407 logs.go:274] 0 containers: []
	W1101 23:23:41.143357  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:23:41.143365  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:23:41.143442  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:23:41.165970  185407 cri.go:87] found id: ""
	I1101 23:23:41.165998  185407 logs.go:274] 0 containers: []
	W1101 23:23:41.166005  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:23:41.166011  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:23:41.166049  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:23:41.188260  185407 cri.go:87] found id: ""
	I1101 23:23:41.188283  185407 logs.go:274] 0 containers: []
	W1101 23:23:41.188289  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:23:41.188295  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:23:41.188338  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:23:41.216324  185407 cri.go:87] found id: ""
	I1101 23:23:41.216354  185407 logs.go:274] 0 containers: []
	W1101 23:23:41.216361  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:23:41.216367  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:23:41.216417  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:23:41.239550  185407 cri.go:87] found id: ""
	I1101 23:23:41.239580  185407 logs.go:274] 0 containers: []
	W1101 23:23:41.239589  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:23:41.239598  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:23:41.239654  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:23:41.262434  185407 cri.go:87] found id: ""
	I1101 23:23:41.262459  185407 logs.go:274] 0 containers: []
	W1101 23:23:41.262465  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:23:41.262472  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:23:41.262525  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:23:41.284458  185407 cri.go:87] found id: ""
	I1101 23:23:41.284484  185407 logs.go:274] 0 containers: []
	W1101 23:23:41.284490  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:23:41.284499  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:23:41.284513  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:23:41.336852  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:23:41.336881  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:23:41.336894  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:23:41.374874  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:23:41.374916  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:23:41.400450  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:23:41.400483  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:23:41.416572  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:51 kubernetes-upgrade-231829 kubelet[5416]: E1101 23:22:51.638230    5416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.416973  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:52 kubernetes-upgrade-231829 kubelet[5427]: E1101 23:22:52.406197    5427 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.417413  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:53 kubernetes-upgrade-231829 kubelet[5437]: E1101 23:22:53.147831    5437 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.417837  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:53 kubernetes-upgrade-231829 kubelet[5447]: E1101 23:22:53.906690    5447 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.418214  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:54 kubernetes-upgrade-231829 kubelet[5459]: E1101 23:22:54.644411    5459 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.418592  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:55 kubernetes-upgrade-231829 kubelet[5469]: E1101 23:22:55.389840    5469 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.418964  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5479]: E1101 23:22:56.149894    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.419344  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:56 kubernetes-upgrade-231829 kubelet[5489]: E1101 23:22:56.986120    5489 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.419759  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:57 kubernetes-upgrade-231829 kubelet[5501]: E1101 23:22:57.664579    5501 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.420142  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:58 kubernetes-upgrade-231829 kubelet[5511]: E1101 23:22:58.390245    5511 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.420577  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:59 kubernetes-upgrade-231829 kubelet[5525]: E1101 23:22:59.148971    5525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.421000  185407 logs.go:138] Found kubelet problem: Nov 01 23:22:59 kubernetes-upgrade-231829 kubelet[5673]: E1101 23:22:59.893778    5673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.421372  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:00 kubernetes-upgrade-231829 kubelet[5685]: E1101 23:23:00.649173    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.421748  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:01 kubernetes-upgrade-231829 kubelet[5696]: E1101 23:23:01.390143    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.422129  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:02 kubernetes-upgrade-231829 kubelet[5707]: E1101 23:23:02.145486    5707 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.422512  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:02 kubernetes-upgrade-231829 kubelet[5719]: E1101 23:23:02.892256    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.422901  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:03 kubernetes-upgrade-231829 kubelet[5730]: E1101 23:23:03.646136    5730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.423366  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:04 kubernetes-upgrade-231829 kubelet[5741]: E1101 23:23:04.392954    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.423837  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:05 kubernetes-upgrade-231829 kubelet[5752]: E1101 23:23:05.148886    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.424231  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:05 kubernetes-upgrade-231829 kubelet[5763]: E1101 23:23:05.892201    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.424614  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:06 kubernetes-upgrade-231829 kubelet[5774]: E1101 23:23:06.648789    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.424986  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:07 kubernetes-upgrade-231829 kubelet[5785]: E1101 23:23:07.395001    5785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.425376  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5797]: E1101 23:23:08.155765    5797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.425759  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5807]: E1101 23:23:08.892662    5807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.426136  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:09 kubernetes-upgrade-231829 kubelet[5820]: E1101 23:23:09.644378    5820 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.426542  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:10 kubernetes-upgrade-231829 kubelet[5969]: E1101 23:23:10.390409    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.426918  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:11 kubernetes-upgrade-231829 kubelet[5980]: E1101 23:23:11.140278    5980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.427291  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:11 kubernetes-upgrade-231829 kubelet[5991]: E1101 23:23:11.890955    5991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.427674  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:12 kubernetes-upgrade-231829 kubelet[6001]: E1101 23:23:12.638998    6001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.428054  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:13 kubernetes-upgrade-231829 kubelet[6012]: E1101 23:23:13.395636    6012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.428431  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:14 kubernetes-upgrade-231829 kubelet[6023]: E1101 23:23:14.138409    6023 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.428802  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:14 kubernetes-upgrade-231829 kubelet[6034]: E1101 23:23:14.892741    6034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.429178  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:15 kubernetes-upgrade-231829 kubelet[6045]: E1101 23:23:15.637535    6045 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.429556  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:16 kubernetes-upgrade-231829 kubelet[6056]: E1101 23:23:16.392743    6056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.429933  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6067]: E1101 23:23:17.139858    6067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.430303  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6079]: E1101 23:23:17.897706    6079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.430685  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:18 kubernetes-upgrade-231829 kubelet[6090]: E1101 23:23:18.639457    6090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.431081  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:19 kubernetes-upgrade-231829 kubelet[6101]: E1101 23:23:19.392562    6101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.431497  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:20 kubernetes-upgrade-231829 kubelet[6114]: E1101 23:23:20.149553    6114 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.431992  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:20 kubernetes-upgrade-231829 kubelet[6260]: E1101 23:23:20.890908    6260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.432447  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:21 kubernetes-upgrade-231829 kubelet[6270]: E1101 23:23:21.644007    6270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.432835  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:22 kubernetes-upgrade-231829 kubelet[6281]: E1101 23:23:22.392197    6281 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.433273  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:23 kubernetes-upgrade-231829 kubelet[6291]: E1101 23:23:23.139205    6291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.433668  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:23 kubernetes-upgrade-231829 kubelet[6302]: E1101 23:23:23.891184    6302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.434056  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:24 kubernetes-upgrade-231829 kubelet[6313]: E1101 23:23:24.637909    6313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.434429  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:25 kubernetes-upgrade-231829 kubelet[6324]: E1101 23:23:25.394445    6324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.434805  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:26 kubernetes-upgrade-231829 kubelet[6334]: E1101 23:23:26.138983    6334 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.435206  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:26 kubernetes-upgrade-231829 kubelet[6345]: E1101 23:23:26.890412    6345 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.435623  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:27 kubernetes-upgrade-231829 kubelet[6356]: E1101 23:23:27.673240    6356 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.436005  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:28 kubernetes-upgrade-231829 kubelet[6366]: E1101 23:23:28.391636    6366 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.436399  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6377]: E1101 23:23:29.150112    6377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.436820  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6388]: E1101 23:23:29.890110    6388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.437197  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:30 kubernetes-upgrade-231829 kubelet[6401]: E1101 23:23:30.648807    6401 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.437579  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:31 kubernetes-upgrade-231829 kubelet[6547]: E1101 23:23:31.391461    6547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.437985  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:32 kubernetes-upgrade-231829 kubelet[6558]: E1101 23:23:32.138247    6558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.438365  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:32 kubernetes-upgrade-231829 kubelet[6569]: E1101 23:23:32.889931    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.438742  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:33 kubernetes-upgrade-231829 kubelet[6582]: E1101 23:23:33.638747    6582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.439116  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:34 kubernetes-upgrade-231829 kubelet[6593]: E1101 23:23:34.391712    6593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.439523  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:35 kubernetes-upgrade-231829 kubelet[6604]: E1101 23:23:35.139754    6604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.439937  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:35 kubernetes-upgrade-231829 kubelet[6615]: E1101 23:23:35.889557    6615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.440365  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:36 kubernetes-upgrade-231829 kubelet[6626]: E1101 23:23:36.638235    6626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.440932  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:37 kubernetes-upgrade-231829 kubelet[6637]: E1101 23:23:37.423731    6637 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.441395  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:38 kubernetes-upgrade-231829 kubelet[6648]: E1101 23:23:38.137170    6648 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.441759  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:38 kubernetes-upgrade-231829 kubelet[6658]: E1101 23:23:38.892229    6658 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.442142  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:39 kubernetes-upgrade-231829 kubelet[6670]: E1101 23:23:39.638896    6670 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.442516  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:40 kubernetes-upgrade-231829 kubelet[6681]: E1101 23:23:40.391128    6681 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.442881  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:41 kubernetes-upgrade-231829 kubelet[6694]: E1101 23:23:41.144674    6694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:23:41.443011  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:23:41.443028  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:23:41.458793  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:23:41.458818  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:23:41.458923  185407 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1101 23:23:41.458941  185407 out.go:239]   Nov 01 23:23:38 kubernetes-upgrade-231829 kubelet[6648]: E1101 23:23:38.137170    6648 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:38 kubernetes-upgrade-231829 kubelet[6648]: E1101 23:23:38.137170    6648 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.458955  185407 out.go:239]   Nov 01 23:23:38 kubernetes-upgrade-231829 kubelet[6658]: E1101 23:23:38.892229    6658 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:38 kubernetes-upgrade-231829 kubelet[6658]: E1101 23:23:38.892229    6658 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.458969  185407 out.go:239]   Nov 01 23:23:39 kubernetes-upgrade-231829 kubelet[6670]: E1101 23:23:39.638896    6670 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:39 kubernetes-upgrade-231829 kubelet[6670]: E1101 23:23:39.638896    6670 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.458981  185407 out.go:239]   Nov 01 23:23:40 kubernetes-upgrade-231829 kubelet[6681]: E1101 23:23:40.391128    6681 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:40 kubernetes-upgrade-231829 kubelet[6681]: E1101 23:23:40.391128    6681 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:41.458992  185407 out.go:239]   Nov 01 23:23:41 kubernetes-upgrade-231829 kubelet[6694]: E1101 23:23:41.144674    6694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:41 kubernetes-upgrade-231829 kubelet[6694]: E1101 23:23:41.144674    6694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:23:41.458998  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:23:41.459012  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:23:51.459343  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:23:51.594270  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:23:51.594353  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:23:51.621183  185407 cri.go:87] found id: ""
	I1101 23:23:51.621211  185407 logs.go:274] 0 containers: []
	W1101 23:23:51.621219  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:23:51.621227  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:23:51.621282  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:23:51.648224  185407 cri.go:87] found id: ""
	I1101 23:23:51.648248  185407 logs.go:274] 0 containers: []
	W1101 23:23:51.648255  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:23:51.648260  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:23:51.648299  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:23:51.671558  185407 cri.go:87] found id: ""
	I1101 23:23:51.671587  185407 logs.go:274] 0 containers: []
	W1101 23:23:51.671595  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:23:51.671603  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:23:51.671650  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:23:51.694633  185407 cri.go:87] found id: ""
	I1101 23:23:51.694662  185407 logs.go:274] 0 containers: []
	W1101 23:23:51.694671  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:23:51.694679  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:23:51.694722  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:23:51.717051  185407 cri.go:87] found id: ""
	I1101 23:23:51.717073  185407 logs.go:274] 0 containers: []
	W1101 23:23:51.717081  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:23:51.717088  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:23:51.717138  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:23:51.741232  185407 cri.go:87] found id: ""
	I1101 23:23:51.741260  185407 logs.go:274] 0 containers: []
	W1101 23:23:51.741269  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:23:51.741277  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:23:51.741326  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:23:51.766916  185407 cri.go:87] found id: ""
	I1101 23:23:51.766945  185407 logs.go:274] 0 containers: []
	W1101 23:23:51.766954  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:23:51.766962  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:23:51.767015  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:23:51.790881  185407 cri.go:87] found id: ""
	I1101 23:23:51.790909  185407 logs.go:274] 0 containers: []
	W1101 23:23:51.790915  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:23:51.790924  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:23:51.790933  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:23:51.806753  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:02 kubernetes-upgrade-231829 kubelet[5707]: E1101 23:23:02.145486    5707 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.807167  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:02 kubernetes-upgrade-231829 kubelet[5719]: E1101 23:23:02.892256    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.807622  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:03 kubernetes-upgrade-231829 kubelet[5730]: E1101 23:23:03.646136    5730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.808011  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:04 kubernetes-upgrade-231829 kubelet[5741]: E1101 23:23:04.392954    5741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.808379  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:05 kubernetes-upgrade-231829 kubelet[5752]: E1101 23:23:05.148886    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.808742  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:05 kubernetes-upgrade-231829 kubelet[5763]: E1101 23:23:05.892201    5763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.809103  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:06 kubernetes-upgrade-231829 kubelet[5774]: E1101 23:23:06.648789    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.809478  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:07 kubernetes-upgrade-231829 kubelet[5785]: E1101 23:23:07.395001    5785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.809833  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5797]: E1101 23:23:08.155765    5797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.810185  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:08 kubernetes-upgrade-231829 kubelet[5807]: E1101 23:23:08.892662    5807 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.810532  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:09 kubernetes-upgrade-231829 kubelet[5820]: E1101 23:23:09.644378    5820 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.810887  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:10 kubernetes-upgrade-231829 kubelet[5969]: E1101 23:23:10.390409    5969 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.811238  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:11 kubernetes-upgrade-231829 kubelet[5980]: E1101 23:23:11.140278    5980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.811646  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:11 kubernetes-upgrade-231829 kubelet[5991]: E1101 23:23:11.890955    5991 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.812074  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:12 kubernetes-upgrade-231829 kubelet[6001]: E1101 23:23:12.638998    6001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.812543  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:13 kubernetes-upgrade-231829 kubelet[6012]: E1101 23:23:13.395636    6012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.813184  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:14 kubernetes-upgrade-231829 kubelet[6023]: E1101 23:23:14.138409    6023 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.813829  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:14 kubernetes-upgrade-231829 kubelet[6034]: E1101 23:23:14.892741    6034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.814473  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:15 kubernetes-upgrade-231829 kubelet[6045]: E1101 23:23:15.637535    6045 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.815114  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:16 kubernetes-upgrade-231829 kubelet[6056]: E1101 23:23:16.392743    6056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.815555  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6067]: E1101 23:23:17.139858    6067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.815907  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6079]: E1101 23:23:17.897706    6079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.816259  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:18 kubernetes-upgrade-231829 kubelet[6090]: E1101 23:23:18.639457    6090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.816601  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:19 kubernetes-upgrade-231829 kubelet[6101]: E1101 23:23:19.392562    6101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.816946  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:20 kubernetes-upgrade-231829 kubelet[6114]: E1101 23:23:20.149553    6114 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.817296  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:20 kubernetes-upgrade-231829 kubelet[6260]: E1101 23:23:20.890908    6260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.817638  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:21 kubernetes-upgrade-231829 kubelet[6270]: E1101 23:23:21.644007    6270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.817983  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:22 kubernetes-upgrade-231829 kubelet[6281]: E1101 23:23:22.392197    6281 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.818390  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:23 kubernetes-upgrade-231829 kubelet[6291]: E1101 23:23:23.139205    6291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.818815  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:23 kubernetes-upgrade-231829 kubelet[6302]: E1101 23:23:23.891184    6302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.819172  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:24 kubernetes-upgrade-231829 kubelet[6313]: E1101 23:23:24.637909    6313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.819541  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:25 kubernetes-upgrade-231829 kubelet[6324]: E1101 23:23:25.394445    6324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.819895  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:26 kubernetes-upgrade-231829 kubelet[6334]: E1101 23:23:26.138983    6334 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.820248  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:26 kubernetes-upgrade-231829 kubelet[6345]: E1101 23:23:26.890412    6345 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.820593  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:27 kubernetes-upgrade-231829 kubelet[6356]: E1101 23:23:27.673240    6356 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.820940  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:28 kubernetes-upgrade-231829 kubelet[6366]: E1101 23:23:28.391636    6366 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.821297  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6377]: E1101 23:23:29.150112    6377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.821649  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6388]: E1101 23:23:29.890110    6388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.822002  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:30 kubernetes-upgrade-231829 kubelet[6401]: E1101 23:23:30.648807    6401 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.822350  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:31 kubernetes-upgrade-231829 kubelet[6547]: E1101 23:23:31.391461    6547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.822708  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:32 kubernetes-upgrade-231829 kubelet[6558]: E1101 23:23:32.138247    6558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.823061  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:32 kubernetes-upgrade-231829 kubelet[6569]: E1101 23:23:32.889931    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.823423  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:33 kubernetes-upgrade-231829 kubelet[6582]: E1101 23:23:33.638747    6582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.823769  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:34 kubernetes-upgrade-231829 kubelet[6593]: E1101 23:23:34.391712    6593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.824119  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:35 kubernetes-upgrade-231829 kubelet[6604]: E1101 23:23:35.139754    6604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.824517  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:35 kubernetes-upgrade-231829 kubelet[6615]: E1101 23:23:35.889557    6615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.825125  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:36 kubernetes-upgrade-231829 kubelet[6626]: E1101 23:23:36.638235    6626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.825604  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:37 kubernetes-upgrade-231829 kubelet[6637]: E1101 23:23:37.423731    6637 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.825959  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:38 kubernetes-upgrade-231829 kubelet[6648]: E1101 23:23:38.137170    6648 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.826309  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:38 kubernetes-upgrade-231829 kubelet[6658]: E1101 23:23:38.892229    6658 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.826706  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:39 kubernetes-upgrade-231829 kubelet[6670]: E1101 23:23:39.638896    6670 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.827064  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:40 kubernetes-upgrade-231829 kubelet[6681]: E1101 23:23:40.391128    6681 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.827431  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:41 kubernetes-upgrade-231829 kubelet[6694]: E1101 23:23:41.144674    6694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.827819  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:41 kubernetes-upgrade-231829 kubelet[6841]: E1101 23:23:41.890788    6841 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.828182  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:42 kubernetes-upgrade-231829 kubelet[6853]: E1101 23:23:42.639437    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.828527  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:43 kubernetes-upgrade-231829 kubelet[6863]: E1101 23:23:43.391821    6863 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.828869  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:44 kubernetes-upgrade-231829 kubelet[6874]: E1101 23:23:44.137983    6874 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.829218  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:44 kubernetes-upgrade-231829 kubelet[6884]: E1101 23:23:44.890376    6884 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.829564  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:45 kubernetes-upgrade-231829 kubelet[6896]: E1101 23:23:45.640210    6896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.829908  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:46 kubernetes-upgrade-231829 kubelet[6907]: E1101 23:23:46.391146    6907 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.830296  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:47 kubernetes-upgrade-231829 kubelet[6918]: E1101 23:23:47.138202    6918 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.830674  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:47 kubernetes-upgrade-231829 kubelet[6929]: E1101 23:23:47.901664    6929 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.831030  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:48 kubernetes-upgrade-231829 kubelet[6940]: E1101 23:23:48.644456    6940 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.831386  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:49 kubernetes-upgrade-231829 kubelet[6950]: E1101 23:23:49.393096    6950 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.831752  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:50 kubernetes-upgrade-231829 kubelet[6961]: E1101 23:23:50.138861    6961 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.832115  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:50 kubernetes-upgrade-231829 kubelet[6972]: E1101 23:23:50.892025    6972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.832472  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:51 kubernetes-upgrade-231829 kubelet[6985]: E1101 23:23:51.643933    6985 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:23:51.832622  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:23:51.832639  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:23:51.850167  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:23:51.850194  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:23:51.906061  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:23:51.906084  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:23:51.906096  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:23:51.948151  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:23:51.948182  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:23:51.973703  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:23:51.973734  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:23:51.973859  185407 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1101 23:23:51.973889  185407 out.go:239]   Nov 01 23:23:48 kubernetes-upgrade-231829 kubelet[6940]: E1101 23:23:48.644456    6940 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:48 kubernetes-upgrade-231829 kubelet[6940]: E1101 23:23:48.644456    6940 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.973898  185407 out.go:239]   Nov 01 23:23:49 kubernetes-upgrade-231829 kubelet[6950]: E1101 23:23:49.393096    6950 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:49 kubernetes-upgrade-231829 kubelet[6950]: E1101 23:23:49.393096    6950 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.973905  185407 out.go:239]   Nov 01 23:23:50 kubernetes-upgrade-231829 kubelet[6961]: E1101 23:23:50.138861    6961 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:50 kubernetes-upgrade-231829 kubelet[6961]: E1101 23:23:50.138861    6961 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.973912  185407 out.go:239]   Nov 01 23:23:50 kubernetes-upgrade-231829 kubelet[6972]: E1101 23:23:50.892025    6972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:50 kubernetes-upgrade-231829 kubelet[6972]: E1101 23:23:50.892025    6972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:23:51.973919  185407 out.go:239]   Nov 01 23:23:51 kubernetes-upgrade-231829 kubelet[6985]: E1101 23:23:51.643933    6985 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Nov 01 23:23:51 kubernetes-upgrade-231829 kubelet[6985]: E1101 23:23:51.643933    6985 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:23:51.973925  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:23:51.973933  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:24:01.974428  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:24:02.094522  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:24:02.094601  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:24:02.119233  185407 cri.go:87] found id: ""
	I1101 23:24:02.119261  185407 logs.go:274] 0 containers: []
	W1101 23:24:02.119268  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:24:02.119275  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:24:02.119327  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:24:02.143647  185407 cri.go:87] found id: ""
	I1101 23:24:02.143678  185407 logs.go:274] 0 containers: []
	W1101 23:24:02.143688  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:24:02.143696  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:24:02.143739  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:24:02.167218  185407 cri.go:87] found id: ""
	I1101 23:24:02.167246  185407 logs.go:274] 0 containers: []
	W1101 23:24:02.167258  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:24:02.167266  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:24:02.167316  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:24:02.191107  185407 cri.go:87] found id: ""
	I1101 23:24:02.191133  185407 logs.go:274] 0 containers: []
	W1101 23:24:02.191139  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:24:02.191144  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:24:02.191192  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:24:02.214854  185407 cri.go:87] found id: ""
	I1101 23:24:02.214880  185407 logs.go:274] 0 containers: []
	W1101 23:24:02.214888  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:24:02.214898  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:24:02.214951  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:24:02.237108  185407 cri.go:87] found id: ""
	I1101 23:24:02.237137  185407 logs.go:274] 0 containers: []
	W1101 23:24:02.237145  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:24:02.237154  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:24:02.237202  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:24:02.259229  185407 cri.go:87] found id: ""
	I1101 23:24:02.259257  185407 logs.go:274] 0 containers: []
	W1101 23:24:02.259266  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:24:02.259273  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:24:02.259342  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:24:02.283449  185407 cri.go:87] found id: ""
	I1101 23:24:02.283478  185407 logs.go:274] 0 containers: []
	W1101 23:24:02.283488  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:24:02.283499  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:24:02.283510  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:24:02.339277  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:24:02.339304  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:24:02.339316  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:24:02.374740  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:24:02.374770  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 23:24:02.399381  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:24:02.399426  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:24:02.416092  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:12 kubernetes-upgrade-231829 kubelet[6001]: E1101 23:23:12.638998    6001 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.416481  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:13 kubernetes-upgrade-231829 kubelet[6012]: E1101 23:23:13.395636    6012 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.416839  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:14 kubernetes-upgrade-231829 kubelet[6023]: E1101 23:23:14.138409    6023 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.417264  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:14 kubernetes-upgrade-231829 kubelet[6034]: E1101 23:23:14.892741    6034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.417842  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:15 kubernetes-upgrade-231829 kubelet[6045]: E1101 23:23:15.637535    6045 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.418312  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:16 kubernetes-upgrade-231829 kubelet[6056]: E1101 23:23:16.392743    6056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.418713  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6067]: E1101 23:23:17.139858    6067 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.419081  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:17 kubernetes-upgrade-231829 kubelet[6079]: E1101 23:23:17.897706    6079 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.419468  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:18 kubernetes-upgrade-231829 kubelet[6090]: E1101 23:23:18.639457    6090 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.419851  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:19 kubernetes-upgrade-231829 kubelet[6101]: E1101 23:23:19.392562    6101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.420216  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:20 kubernetes-upgrade-231829 kubelet[6114]: E1101 23:23:20.149553    6114 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.420578  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:20 kubernetes-upgrade-231829 kubelet[6260]: E1101 23:23:20.890908    6260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.420957  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:21 kubernetes-upgrade-231829 kubelet[6270]: E1101 23:23:21.644007    6270 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.421330  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:22 kubernetes-upgrade-231829 kubelet[6281]: E1101 23:23:22.392197    6281 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.421692  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:23 kubernetes-upgrade-231829 kubelet[6291]: E1101 23:23:23.139205    6291 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.422048  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:23 kubernetes-upgrade-231829 kubelet[6302]: E1101 23:23:23.891184    6302 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.422413  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:24 kubernetes-upgrade-231829 kubelet[6313]: E1101 23:23:24.637909    6313 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.422776  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:25 kubernetes-upgrade-231829 kubelet[6324]: E1101 23:23:25.394445    6324 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.423144  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:26 kubernetes-upgrade-231829 kubelet[6334]: E1101 23:23:26.138983    6334 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.423539  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:26 kubernetes-upgrade-231829 kubelet[6345]: E1101 23:23:26.890412    6345 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.423927  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:27 kubernetes-upgrade-231829 kubelet[6356]: E1101 23:23:27.673240    6356 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.424293  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:28 kubernetes-upgrade-231829 kubelet[6366]: E1101 23:23:28.391636    6366 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.424725  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6377]: E1101 23:23:29.150112    6377 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.425113  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:29 kubernetes-upgrade-231829 kubelet[6388]: E1101 23:23:29.890110    6388 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.425483  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:30 kubernetes-upgrade-231829 kubelet[6401]: E1101 23:23:30.648807    6401 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.425844  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:31 kubernetes-upgrade-231829 kubelet[6547]: E1101 23:23:31.391461    6547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.426201  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:32 kubernetes-upgrade-231829 kubelet[6558]: E1101 23:23:32.138247    6558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.426570  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:32 kubernetes-upgrade-231829 kubelet[6569]: E1101 23:23:32.889931    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.426933  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:33 kubernetes-upgrade-231829 kubelet[6582]: E1101 23:23:33.638747    6582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.427298  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:34 kubernetes-upgrade-231829 kubelet[6593]: E1101 23:23:34.391712    6593 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.427722  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:35 kubernetes-upgrade-231829 kubelet[6604]: E1101 23:23:35.139754    6604 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.428092  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:35 kubernetes-upgrade-231829 kubelet[6615]: E1101 23:23:35.889557    6615 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.428456  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:36 kubernetes-upgrade-231829 kubelet[6626]: E1101 23:23:36.638235    6626 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.428848  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:37 kubernetes-upgrade-231829 kubelet[6637]: E1101 23:23:37.423731    6637 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.429218  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:38 kubernetes-upgrade-231829 kubelet[6648]: E1101 23:23:38.137170    6648 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.429589  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:38 kubernetes-upgrade-231829 kubelet[6658]: E1101 23:23:38.892229    6658 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.429958  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:39 kubernetes-upgrade-231829 kubelet[6670]: E1101 23:23:39.638896    6670 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.430317  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:40 kubernetes-upgrade-231829 kubelet[6681]: E1101 23:23:40.391128    6681 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.430689  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:41 kubernetes-upgrade-231829 kubelet[6694]: E1101 23:23:41.144674    6694 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.431054  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:41 kubernetes-upgrade-231829 kubelet[6841]: E1101 23:23:41.890788    6841 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.431437  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:42 kubernetes-upgrade-231829 kubelet[6853]: E1101 23:23:42.639437    6853 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.431809  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:43 kubernetes-upgrade-231829 kubelet[6863]: E1101 23:23:43.391821    6863 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.432207  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:44 kubernetes-upgrade-231829 kubelet[6874]: E1101 23:23:44.137983    6874 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.432574  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:44 kubernetes-upgrade-231829 kubelet[6884]: E1101 23:23:44.890376    6884 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.432938  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:45 kubernetes-upgrade-231829 kubelet[6896]: E1101 23:23:45.640210    6896 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.433286  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:46 kubernetes-upgrade-231829 kubelet[6907]: E1101 23:23:46.391146    6907 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.433647  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:47 kubernetes-upgrade-231829 kubelet[6918]: E1101 23:23:47.138202    6918 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.434002  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:47 kubernetes-upgrade-231829 kubelet[6929]: E1101 23:23:47.901664    6929 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.434344  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:48 kubernetes-upgrade-231829 kubelet[6940]: E1101 23:23:48.644456    6940 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.434692  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:49 kubernetes-upgrade-231829 kubelet[6950]: E1101 23:23:49.393096    6950 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.435040  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:50 kubernetes-upgrade-231829 kubelet[6961]: E1101 23:23:50.138861    6961 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.435387  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:50 kubernetes-upgrade-231829 kubelet[6972]: E1101 23:23:50.892025    6972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.435792  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:51 kubernetes-upgrade-231829 kubelet[6985]: E1101 23:23:51.643933    6985 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.436149  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:52 kubernetes-upgrade-231829 kubelet[7131]: E1101 23:23:52.393303    7131 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.436496  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:53 kubernetes-upgrade-231829 kubelet[7142]: E1101 23:23:53.142288    7142 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.436844  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:53 kubernetes-upgrade-231829 kubelet[7153]: E1101 23:23:53.889178    7153 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.437195  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:54 kubernetes-upgrade-231829 kubelet[7164]: E1101 23:23:54.638641    7164 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.437545  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:55 kubernetes-upgrade-231829 kubelet[7175]: E1101 23:23:55.390892    7175 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.437911  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:56 kubernetes-upgrade-231829 kubelet[7186]: E1101 23:23:56.139274    7186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.438252  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:56 kubernetes-upgrade-231829 kubelet[7196]: E1101 23:23:56.893742    7196 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.438600  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:57 kubernetes-upgrade-231829 kubelet[7208]: E1101 23:23:57.640679    7208 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.438977  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:58 kubernetes-upgrade-231829 kubelet[7219]: E1101 23:23:58.391692    7219 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.439322  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:59 kubernetes-upgrade-231829 kubelet[7230]: E1101 23:23:59.137999    7230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.439698  185407 logs.go:138] Found kubelet problem: Nov 01 23:23:59 kubernetes-upgrade-231829 kubelet[7241]: E1101 23:23:59.890236    7241 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.440044  185407 logs.go:138] Found kubelet problem: Nov 01 23:24:00 kubernetes-upgrade-231829 kubelet[7252]: E1101 23:24:00.641405    7252 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.440402  185407 logs.go:138] Found kubelet problem: Nov 01 23:24:01 kubernetes-upgrade-231829 kubelet[7263]: E1101 23:24:01.390637    7263 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.440754  185407 logs.go:138] Found kubelet problem: Nov 01 23:24:02 kubernetes-upgrade-231829 kubelet[7276]: E1101 23:24:02.144306    7276 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
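The crash loop above is the kubelet exiting at flag-parse time: `--cni-conf-dir` (and its sibling `--cni-bin-dir`) were removed from the kubelet in Kubernetes 1.24 along with dockershim, but the node's saved `/var/lib/kubelet/kubeadm-flags.env` still passes them. A hedged sketch of the cleanup, run against a made-up sample file rather than the real node's env file:

```shell
# Illustrative kubeadm-flags.env carrying flags that kubelet >= 1.24 rejects.
# The contents below are a sample, not taken from this test run.
cat > /tmp/kubeadm-flags.env <<'EOF'
KUBELET_KUBEADM_ARGS="--container-runtime=remote --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --pod-infra-container-image=k8s.gcr.io/pause:3.6"
EOF

# Strip the removed dockershim-era CNI flags so the kubelet can parse its args.
sed -i -E 's/--cni-(conf|bin)-dir=[^" ]+ ?//g' /tmp/kubeadm-flags.env

cat /tmp/kubeadm-flags.env
```

On a real node the edit would target `/var/lib/kubelet/kubeadm-flags.env` followed by `systemctl restart kubelet`; here the file path is a scratch copy.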
	I1101 23:24:02.440874  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:24:02.440889  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:24:02.456653  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:24:02.456677  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1101 23:24:02.456788  185407 out.go:239] X Problems detected in kubelet:
	W1101 23:24:02.456804  185407 out.go:239]   Nov 01 23:23:59 kubernetes-upgrade-231829 kubelet[7230]: E1101 23:23:59.137999    7230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.456812  185407 out.go:239]   Nov 01 23:23:59 kubernetes-upgrade-231829 kubelet[7241]: E1101 23:23:59.890236    7241 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.456819  185407 out.go:239]   Nov 01 23:24:00 kubernetes-upgrade-231829 kubelet[7252]: E1101 23:24:00.641405    7252 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.456829  185407 out.go:239]   Nov 01 23:24:01 kubernetes-upgrade-231829 kubelet[7263]: E1101 23:24:01.390637    7263 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:24:02.456834  185407 out.go:239]   Nov 01 23:24:02 kubernetes-upgrade-231829 kubelet[7276]: E1101 23:24:02.144306    7276 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:24:02.456838  185407 out.go:309] Setting ErrFile to fd 2...
	I1101 23:24:02.456843  185407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:24:12.457924  185407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:24:12.467007  185407 kubeadm.go:631] restartCluster took 4m10.271755367s
	W1101 23:24:12.467167  185407 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I1101 23:24:12.467203  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1101 23:24:14.374078  185407 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.906855739s)
	I1101 23:24:14.374135  185407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:24:14.384118  185407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 23:24:14.391307  185407 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 23:24:14.391352  185407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:24:14.398260  185407 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
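The `ls -la` probe above is minikube checking whether all four kubeconfigs exist before attempting stale-config cleanup; since the `kubeadm reset` just wiped them, the check exits with status 2 and minikube proceeds straight to a fresh `kubeadm init`. A minimal sketch of that presence check, rooted in a temp dir so it can run anywhere (paths and behavior are an assumption about the probe's intent, not minikube's exact code):

```shell
# Recreate the four-file kubeconfig probe under a scratch root.
root=$(mktemp -d)
mkdir -p "$root/etc/kubernetes"
touch "$root/etc/kubernetes/admin.conf"   # pretend only one of the four exists

missing=0
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  [ -e "$root/etc/kubernetes/$f" ] || missing=$((missing + 1))
done

if [ "$missing" -gt 0 ]; then
  echo "config check failed, skipping stale config cleanup ($missing missing)"
fi
```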
	I1101 23:24:14.398295  185407 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 23:24:14.440547  185407 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1101 23:24:14.440664  185407 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 23:24:14.469720  185407 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1101 23:24:14.469817  185407 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1101 23:24:14.469882  185407 kubeadm.go:317] OS: Linux
	I1101 23:24:14.469973  185407 kubeadm.go:317] CGROUPS_CPU: enabled
	I1101 23:24:14.470068  185407 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1101 23:24:14.470153  185407 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1101 23:24:14.470227  185407 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1101 23:24:14.470288  185407 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1101 23:24:14.470348  185407 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1101 23:24:14.470420  185407 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1101 23:24:14.470501  185407 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1101 23:24:14.470546  185407 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1101 23:24:14.538019  185407 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 23:24:14.538146  185407 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 23:24:14.538247  185407 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 23:24:14.656653  185407 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 23:24:14.658637  185407 out.go:204]   - Generating certificates and keys ...
	I1101 23:24:14.658766  185407 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 23:24:14.658855  185407 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 23:24:14.658962  185407 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 23:24:14.659055  185407 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1101 23:24:14.659155  185407 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 23:24:14.659227  185407 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1101 23:24:14.659324  185407 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1101 23:24:14.663662  185407 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1101 23:24:14.663756  185407 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 23:24:14.663848  185407 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 23:24:14.663910  185407 kubeadm.go:317] [certs] Using the existing "sa" key
	I1101 23:24:14.663999  185407 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 23:24:14.811235  185407 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 23:24:14.960652  185407 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 23:24:15.098758  185407 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 23:24:15.228664  185407 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 23:24:15.262273  185407 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 23:24:15.264686  185407 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 23:24:15.264755  185407 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1101 23:24:15.369754  185407 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 23:24:15.372499  185407 out.go:204]   - Booting up control plane ...
	I1101 23:24:15.372672  185407 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 23:24:15.372785  185407 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 23:24:15.373510  185407 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 23:24:15.374417  185407 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 23:24:15.376716  185407 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 23:24:55.377512  185407 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 23:24:55.377868  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:24:55.378125  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:25:00.379048  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:25:00.379298  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:25:10.379826  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:25:10.380033  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:25:30.380818  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:25:30.380989  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:26:10.382121  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:26:10.382387  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
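The kubelet-check timestamps above (23:24:55, 23:25:00, 23:25:10, 23:25:30, 23:26:10) show kubeadm polling `http://localhost:10248/healthz` with a doubling backoff of roughly 5s, 10s, 20s, 40s before giving up. A sketch of that loop, with the curl call stubbed out so it runs without a kubelet (the stub and shortened delays are assumptions for illustration):

```shell
# Stand-in for: curl -sSf http://localhost:10248/healthz
attempt=0
probe() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 4 ]   # pretend the kubelet becomes healthy on the 4th try
}

delay=1                  # kubeadm starts around 5s; shortened here
tries=0
until probe; do
  tries=$((tries + 1))
  if [ "$tries" -ge 6 ]; then
    echo "timed out waiting for the condition"
    break
  fi
  sleep 0                # would be: sleep "$delay"
  delay=$((delay * 2))   # doubling backoff, as the timestamps suggest
done
echo "attempts=$attempt"
```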
	I1101 23:26:10.382421  185407 kubeadm.go:317] 
	I1101 23:26:10.382471  185407 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 23:26:10.382539  185407 kubeadm.go:317] 	timed out waiting for the condition
	I1101 23:26:10.382549  185407 kubeadm.go:317] 
	I1101 23:26:10.382594  185407 kubeadm.go:317] This error is likely caused by:
	I1101 23:26:10.382648  185407 kubeadm.go:317] 	- The kubelet is not running
	I1101 23:26:10.382761  185407 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 23:26:10.382773  185407 kubeadm.go:317] 
	I1101 23:26:10.382926  185407 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 23:26:10.382989  185407 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 23:26:10.383026  185407 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 23:26:10.383036  185407 kubeadm.go:317] 
	I1101 23:26:10.383144  185407 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 23:26:10.383256  185407 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1101 23:26:10.383380  185407 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1101 23:26:10.383558  185407 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I1101 23:26:10.383665  185407 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 23:26:10.383785  185407 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I1101 23:26:10.385232  185407 kubeadm.go:317] W1101 23:24:14.435357    8598 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1101 23:26:10.385412  185407 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1101 23:26:10.385524  185407 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 23:26:10.385646  185407 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 23:26:10.385766  185407 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
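The troubleshooting advice kubeadm prints above pipes `crictl ps -a` through grep to isolate a crashed control-plane container. Simulated here on canned output, since a live containerd socket is not assumed to be available (the container IDs and table below are fabricated samples):

```shell
# Canned stand-in for: crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
cat > /tmp/crictl-ps.txt <<'EOF'
CONTAINER     IMAGE         STATE    NAME                     POD ID
1a2b3c4d5e6f  sha256:aaaa   Exited   kube-apiserver           abc111
2b3c4d5e6f7a  sha256:bbbb   Running  kube-controller-manager  abc222
3c4d5e6f7a8b  sha256:cccc   Running  pause                    abc333
EOF

# The exact filter kubeadm suggests: keep kube* containers, drop pause sandboxes.
grep kube /tmp/crictl-ps.txt | grep -v pause
```

Against real output, the `Exited` kube-apiserver row would be the one to inspect with `crictl logs CONTAINERID`, as the advice above goes on to say.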
	W1101 23:26:10.386077  185407 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1101 23:24:14.435357    8598 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1101 23:26:10.386152  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1101 23:26:12.204518  185407 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.818333969s)
	I1101 23:26:12.204584  185407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:26:12.213991  185407 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 23:26:12.214034  185407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:26:12.220783  185407 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 23:26:12.220821  185407 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 23:26:12.257554  185407 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1101 23:26:12.257622  185407 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 23:26:12.287290  185407 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1101 23:26:12.287364  185407 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1101 23:26:12.287428  185407 kubeadm.go:317] OS: Linux
	I1101 23:26:12.287504  185407 kubeadm.go:317] CGROUPS_CPU: enabled
	I1101 23:26:12.287633  185407 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1101 23:26:12.287706  185407 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1101 23:26:12.287768  185407 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1101 23:26:12.287843  185407 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1101 23:26:12.287930  185407 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1101 23:26:12.287998  185407 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1101 23:26:12.288078  185407 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1101 23:26:12.288148  185407 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1101 23:26:12.350979  185407 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 23:26:12.351126  185407 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 23:26:12.351250  185407 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 23:26:12.472800  185407 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 23:26:12.476070  185407 out.go:204]   - Generating certificates and keys ...
	I1101 23:26:12.476196  185407 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 23:26:12.476286  185407 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 23:26:12.476437  185407 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 23:26:12.476561  185407 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1101 23:26:12.476676  185407 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 23:26:12.476746  185407 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1101 23:26:12.476824  185407 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1101 23:26:12.476905  185407 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1101 23:26:12.477000  185407 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 23:26:12.477079  185407 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 23:26:12.477139  185407 kubeadm.go:317] [certs] Using the existing "sa" key
	I1101 23:26:12.477205  185407 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 23:26:12.590345  185407 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 23:26:12.737256  185407 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 23:26:12.922446  185407 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 23:26:13.038363  185407 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 23:26:13.050818  185407 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 23:26:13.051710  185407 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 23:26:13.051828  185407 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1101 23:26:13.135657  185407 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 23:26:13.138028  185407 out.go:204]   - Booting up control plane ...
	I1101 23:26:13.138188  185407 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 23:26:13.138327  185407 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 23:26:13.139355  185407 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 23:26:13.140088  185407 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 23:26:13.143128  185407 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 23:26:53.143918  185407 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1101 23:26:53.144325  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:26:53.144535  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:26:58.145551  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:26:58.145826  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:27:08.146443  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:27:08.146747  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:27:28.147884  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:27:28.148138  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:28:08.148535  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:28:08.148824  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:28:08.148865  185407 kubeadm.go:317] 
	I1101 23:28:08.148928  185407 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 23:28:08.148993  185407 kubeadm.go:317] 	timed out waiting for the condition
	I1101 23:28:08.149006  185407 kubeadm.go:317] 
	I1101 23:28:08.149056  185407 kubeadm.go:317] This error is likely caused by:
	I1101 23:28:08.149127  185407 kubeadm.go:317] 	- The kubelet is not running
	I1101 23:28:08.149289  185407 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 23:28:08.149303  185407 kubeadm.go:317] 
	I1101 23:28:08.149423  185407 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 23:28:08.149474  185407 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 23:28:08.149511  185407 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 23:28:08.149521  185407 kubeadm.go:317] 
	I1101 23:28:08.149672  185407 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 23:28:08.149791  185407 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1101 23:28:08.149924  185407 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1101 23:28:08.150106  185407 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I1101 23:28:08.150231  185407 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 23:28:08.150376  185407 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I1101 23:28:08.151585  185407 kubeadm.go:317] W1101 23:26:12.252784   11479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1101 23:28:08.151824  185407 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1101 23:28:08.151960  185407 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 23:28:08.152103  185407 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 23:28:08.152169  185407 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1101 23:28:08.152244  185407 kubeadm.go:398] StartCluster complete in 8m5.994430898s
	I1101 23:28:08.152280  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:28:08.152328  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:28:08.176225  185407 cri.go:87] found id: ""
	I1101 23:28:08.176251  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.176260  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:28:08.176270  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:28:08.176327  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:28:08.199234  185407 cri.go:87] found id: ""
	I1101 23:28:08.199258  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.199266  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:28:08.199274  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:28:08.199322  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:28:08.221154  185407 cri.go:87] found id: ""
	I1101 23:28:08.221176  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.221183  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:28:08.221188  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:28:08.221230  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:28:08.243882  185407 cri.go:87] found id: ""
	I1101 23:28:08.243906  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.243914  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:28:08.243920  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:28:08.243966  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:28:08.265488  185407 cri.go:87] found id: ""
	I1101 23:28:08.265514  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.265520  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:28:08.265526  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:28:08.265563  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:28:08.290043  185407 cri.go:87] found id: ""
	I1101 23:28:08.290075  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.290084  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:28:08.290092  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:28:08.290143  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:28:08.317739  185407 cri.go:87] found id: ""
	I1101 23:28:08.317770  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.317780  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:28:08.317789  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:28:08.317844  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:28:08.345570  185407 cri.go:87] found id: ""
	I1101 23:28:08.345602  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.345612  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:28:08.345623  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:28:08.345637  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:28:08.367361  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:18 kubernetes-upgrade-231829 kubelet[12586]: E1101 23:27:18.390338   12586 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.367827  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:19 kubernetes-upgrade-231829 kubelet[12597]: E1101 23:27:19.139874   12597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.368328  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:19 kubernetes-upgrade-231829 kubelet[12609]: E1101 23:27:19.896066   12609 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.368905  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:20 kubernetes-upgrade-231829 kubelet[12620]: E1101 23:27:20.636659   12620 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.369473  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:21 kubernetes-upgrade-231829 kubelet[12631]: E1101 23:27:21.400605   12631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.369963  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:22 kubernetes-upgrade-231829 kubelet[12642]: E1101 23:27:22.138520   12642 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.370402  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:22 kubernetes-upgrade-231829 kubelet[12653]: E1101 23:27:22.897498   12653 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.370772  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:23 kubernetes-upgrade-231829 kubelet[12663]: E1101 23:27:23.643138   12663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.371249  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:24 kubernetes-upgrade-231829 kubelet[12673]: E1101 23:27:24.396966   12673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.371861  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:25 kubernetes-upgrade-231829 kubelet[12684]: E1101 23:27:25.154324   12684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.372373  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:25 kubernetes-upgrade-231829 kubelet[12695]: E1101 23:27:25.891181   12695 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.372883  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:26 kubernetes-upgrade-231829 kubelet[12706]: E1101 23:27:26.641044   12706 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.373477  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:27 kubernetes-upgrade-231829 kubelet[12717]: E1101 23:27:27.394650   12717 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.373996  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:28 kubernetes-upgrade-231829 kubelet[12727]: E1101 23:27:28.143993   12727 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.374535  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:28 kubernetes-upgrade-231829 kubelet[12737]: E1101 23:27:28.901435   12737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.374984  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:29 kubernetes-upgrade-231829 kubelet[12747]: E1101 23:27:29.650465   12747 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.375603  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:30 kubernetes-upgrade-231829 kubelet[12758]: E1101 23:27:30.402926   12758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.376122  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:31 kubernetes-upgrade-231829 kubelet[12769]: E1101 23:27:31.145486   12769 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.376740  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:31 kubernetes-upgrade-231829 kubelet[12780]: E1101 23:27:31.891079   12780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.377279  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:32 kubernetes-upgrade-231829 kubelet[12791]: E1101 23:27:32.639605   12791 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.377873  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:33 kubernetes-upgrade-231829 kubelet[12802]: E1101 23:27:33.392107   12802 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.378310  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:34 kubernetes-upgrade-231829 kubelet[12813]: E1101 23:27:34.138870   12813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.378752  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:34 kubernetes-upgrade-231829 kubelet[12824]: E1101 23:27:34.893846   12824 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.379302  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:35 kubernetes-upgrade-231829 kubelet[12835]: E1101 23:27:35.651312   12835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.379774  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:36 kubernetes-upgrade-231829 kubelet[12845]: E1101 23:27:36.391392   12845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.380173  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:37 kubernetes-upgrade-231829 kubelet[12856]: E1101 23:27:37.143847   12856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.380643  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:37 kubernetes-upgrade-231829 kubelet[12867]: E1101 23:27:37.889713   12867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.381061  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:38 kubernetes-upgrade-231829 kubelet[12878]: E1101 23:27:38.641845   12878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.381528  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:39 kubernetes-upgrade-231829 kubelet[12889]: E1101 23:27:39.394820   12889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.382097  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:40 kubernetes-upgrade-231829 kubelet[12900]: E1101 23:27:40.145508   12900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.382534  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:40 kubernetes-upgrade-231829 kubelet[12911]: E1101 23:27:40.890940   12911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.382956  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:41 kubernetes-upgrade-231829 kubelet[12922]: E1101 23:27:41.640999   12922 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.383572  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:42 kubernetes-upgrade-231829 kubelet[12935]: E1101 23:27:42.391366   12935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.384065  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:43 kubernetes-upgrade-231829 kubelet[12946]: E1101 23:27:43.151607   12946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.384598  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:43 kubernetes-upgrade-231829 kubelet[12956]: E1101 23:27:43.890804   12956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.385231  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:44 kubernetes-upgrade-231829 kubelet[12966]: E1101 23:27:44.650554   12966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.385855  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:45 kubernetes-upgrade-231829 kubelet[12977]: E1101 23:27:45.391536   12977 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.386299  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:46 kubernetes-upgrade-231829 kubelet[12988]: E1101 23:27:46.161335   12988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.386693  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:46 kubernetes-upgrade-231829 kubelet[12998]: E1101 23:27:46.888241   12998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.387237  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:47 kubernetes-upgrade-231829 kubelet[13009]: E1101 23:27:47.638878   13009 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.387850  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:48 kubernetes-upgrade-231829 kubelet[13020]: E1101 23:27:48.394035   13020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.388460  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:49 kubernetes-upgrade-231829 kubelet[13031]: E1101 23:27:49.143774   13031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.388919  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:49 kubernetes-upgrade-231829 kubelet[13041]: E1101 23:27:49.896404   13041 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.389487  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:50 kubernetes-upgrade-231829 kubelet[13052]: E1101 23:27:50.639330   13052 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.390042  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:51 kubernetes-upgrade-231829 kubelet[13063]: E1101 23:27:51.392211   13063 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.390652  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:52 kubernetes-upgrade-231829 kubelet[13074]: E1101 23:27:52.139809   13074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.391164  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:52 kubernetes-upgrade-231829 kubelet[13085]: E1101 23:27:52.890814   13085 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.391637  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:53 kubernetes-upgrade-231829 kubelet[13096]: E1101 23:27:53.638640   13096 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.392064  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:54 kubernetes-upgrade-231829 kubelet[13108]: E1101 23:27:54.392585   13108 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.392544  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:55 kubernetes-upgrade-231829 kubelet[13119]: E1101 23:27:55.141081   13119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.393148  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:55 kubernetes-upgrade-231829 kubelet[13130]: E1101 23:27:55.890164   13130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.393703  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:56 kubernetes-upgrade-231829 kubelet[13141]: E1101 23:27:56.639793   13141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.394140  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:57 kubernetes-upgrade-231829 kubelet[13152]: E1101 23:27:57.392278   13152 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.394646  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:58 kubernetes-upgrade-231829 kubelet[13163]: E1101 23:27:58.140104   13163 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.395213  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:58 kubernetes-upgrade-231829 kubelet[13174]: E1101 23:27:58.889267   13174 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.395792  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:59 kubernetes-upgrade-231829 kubelet[13186]: E1101 23:27:59.648961   13186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.396392  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:00 kubernetes-upgrade-231829 kubelet[13197]: E1101 23:28:00.394164   13197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.396869  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:01 kubernetes-upgrade-231829 kubelet[13209]: E1101 23:28:01.147995   13209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.397406  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:01 kubernetes-upgrade-231829 kubelet[13220]: E1101 23:28:01.893499   13220 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.397752  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:02 kubernetes-upgrade-231829 kubelet[13232]: E1101 23:28:02.650517   13232 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.398105  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:03 kubernetes-upgrade-231829 kubelet[13242]: E1101 23:28:03.405628   13242 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.398458  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:04 kubernetes-upgrade-231829 kubelet[13252]: E1101 23:28:04.143793   13252 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.398807  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:04 kubernetes-upgrade-231829 kubelet[13262]: E1101 23:28:04.894747   13262 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.399162  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:05 kubernetes-upgrade-231829 kubelet[13273]: E1101 23:28:05.638910   13273 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.399536  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:06 kubernetes-upgrade-231829 kubelet[13283]: E1101 23:28:06.389853   13283 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.399896  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:07 kubernetes-upgrade-231829 kubelet[13295]: E1101 23:28:07.139644   13295 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.400255  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:07 kubernetes-upgrade-231829 kubelet[13305]: E1101 23:28:07.888182   13305 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:28:08.400376  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:28:08.400390  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:28:08.419266  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:28:08.419303  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:28:08.481115  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:28:08.481145  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:28:08.481159  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:28:08.548695  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:28:08.548739  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1101 23:28:08.576173  185407 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1101 23:26:12.252784   11479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1101 23:28:08.576217  185407 out.go:239] * 
	W1101 23:28:08.576460  185407 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1101 23:26:12.252784   11479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 23:28:08.576497  185407 out.go:239] * 
	W1101 23:28:08.577815  185407 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 23:28:08.580409  185407 out.go:177] X Problems detected in kubelet:
	I1101 23:28:08.581740  185407 out.go:177]   Nov 01 23:27:18 kubernetes-upgrade-231829 kubelet[12586]: E1101 23:27:18.390338   12586 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:28:08.583235  185407 out.go:177]   Nov 01 23:27:19 kubernetes-upgrade-231829 kubelet[12597]: E1101 23:27:19.139874   12597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:28:08.584610  185407 out.go:177]   Nov 01 23:27:19 kubernetes-upgrade-231829 kubelet[12609]: E1101 23:27:19.896066   12609 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:28:08.588577  185407 out.go:177] 
	W1101 23:28:08.590415  185407 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1101 23:26:12.252784   11479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1101 23:26:12.252784   11479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 23:28:08.590549  185407 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1101 23:28:08.590624  185407 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1101 23:28:08.593189  185407 out.go:177] 

** /stderr **
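The repeated `[kubelet-check]` failures above come from kubeadm polling the kubelet's healthz endpoint (`http://localhost:10248/healthz`) and getting connection refused because the kubelet never came up. As a rough illustration of that probe (a sketch, not kubeadm's actual implementation; the helper name is ours), a standalone check might look like:

```python
import urllib.request
import urllib.error


def probe_kubelet_healthz(host="localhost", port=10248, timeout=2):
    """Mimic kubeadm's kubelet-check: GET /healthz and report the outcome."""
    url = f"http://{host}:{port}/healthz"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode()
            if resp.status == 200 and body == "ok":
                return f"healthy: {body}"
            return f"unexpected response: {body!r}"
    except (urllib.error.URLError, OSError) as e:
        # This is the failure mode in the log: connection refused while the
        # kubelet is not running (or crashes before binding the port).
        return f"unreachable: {e}"
```

When the kubelet service is down, this returns an `unreachable: ...` result, matching the `dial tcp 127.0.0.1:10248: connect: connection refused` lines in the log; the next diagnostic step is then `journalctl -xeu kubelet` as the output suggests.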
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-231829 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-231829 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-231829 version --output=json: exit status 1 (70.122147ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "25",
	    "gitVersion": "v1.25.3",
	    "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	    "gitTreeState": "clean",
	    "buildDate": "2022-10-12T10:57:26Z",
	    "goVersion": "go1.19.2",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v4.5.7"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2022-11-01 23:28:09.026444094 +0000 UTC m=+2598.162900549
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-231829
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-231829:

-- stdout --
	[
	    {
	        "Id": "f1f7240c914f237fe6863ee48cfd1e87fb017cab0d4b12851e46aa05db73935d",
	        "Created": "2022-11-01T23:18:35.335982225Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186010,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-01T23:19:25.274016685Z",
	            "FinishedAt": "2022-11-01T23:19:20.354242152Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/f1f7240c914f237fe6863ee48cfd1e87fb017cab0d4b12851e46aa05db73935d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1f7240c914f237fe6863ee48cfd1e87fb017cab0d4b12851e46aa05db73935d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1f7240c914f237fe6863ee48cfd1e87fb017cab0d4b12851e46aa05db73935d/hosts",
	        "LogPath": "/var/lib/docker/containers/f1f7240c914f237fe6863ee48cfd1e87fb017cab0d4b12851e46aa05db73935d/f1f7240c914f237fe6863ee48cfd1e87fb017cab0d4b12851e46aa05db73935d-json.log",
	        "Name": "/kubernetes-upgrade-231829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-231829:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-231829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4e956291c12b29486a27a5c39fd53afa3174f4380097809c8a71397595abaa7d-init/diff:/var/lib/docker/overlay2/3304d2e292dd827b741fa7e7dfa0dd06c735a2abf2639025717eb96733168a33/diff:/var/lib/docker/overlay2/f66a2ec830111a507a160d2f7f58d1ab0df8159096f23d5da74ca81116f032a4/diff:/var/lib/docker/overlay2/58562370bf5535a09b5f3ac667ae66ace0239a84b1724c693027cd984380e69d/diff:/var/lib/docker/overlay2/ad70e4fabb7d3b3f908814730456a6f69256cb5bf3f6281cf2e1de2d9ad6e620/diff:/var/lib/docker/overlay2/372e614731843da3a6a8586e11682dd7031ded66b212170eab90ed3974b91656/diff:/var/lib/docker/overlay2/0d5e9529a6b310e7de135cb901fad0589f42c74f315a8d227b3f1058a0635d3a/diff:/var/lib/docker/overlay2/68e9f113391c7a1cb7cf63712d04a796653c1b7efd904081fd8696e3142066cb/diff:/var/lib/docker/overlay2/25d5a308de1516fe45d18cc8d3b35ae4e3de5999ad6bffc678475b1fa74ce54c/diff:/var/lib/docker/overlay2/4fbedef0e02e22b00c09b167edef3a01d1baaa6ae2581ce1816acceb7b82904f/diff:/var/lib/docker/overlay2/237634e28f08af84128abf2ca5885d71bf5f916d63c6088eb178b0729931f43f/diff:/var/lib/docker/overlay2/c1e44e9be7cdbbc0eecc5b798955e90ab62ff8e89d859ab692d424b63f8db9a1/diff:/var/lib/docker/overlay2/945c70a7d8c420004bb39705628a454a575ae067a91da51362818da5f64779bc/diff:/var/lib/docker/overlay2/ed05d73c801ea52b22e058a7fa685c4412453d8e5f0af711d6c43dc75ea9f082/diff:/var/lib/docker/overlay2/4f5b59c087860f39c4b24105ac4677a11a5167aec2093628c48e263d18b25f68/diff:/var/lib/docker/overlay2/5535048bf0d8af7ed100e4121cd2d5d8b776a0155a6edccc3bea22e753d8597b/diff:/var/lib/docker/overlay2/51c67944173d540bb52c33e409e2cfb8d381dc5a649d02e5599384faf4caa6ff/diff:/var/lib/docker/overlay2/5a530f1cc647ab6a7e5fbe252ffbfada764bc01fee20f5f70ad2ebe08b60c7c5/diff:/var/lib/docker/overlay2/d4472d58828ae545a5beec970f632730af916c03aea959ec3ec7d64a0579b1ea/diff:/var/lib/docker/overlay2/6b823f45daca0146f21cbfbe06e22b48fd5bf7fcf086765dde5c36cc5ae90aed/diff:/var/lib/docker/overlay2/54b88f4723cfc7221b7f0789d171797ed1328bd24d62508bfa456753f3e5c2bc/diff:/var/lib/docker/overlay2/44599d073f725ff40c4736e9287865ef0372f691d010db33ba7bf69574f74aca/diff:/var/lib/docker/overlay2/68defae06f1c119684bbec2cd0b360da76b8ab455d9a617b0b16ea22bd3617c5/diff:/var/lib/docker/overlay2/2dd86bf6ab6202500623423a11736ce7c2c96ebe5d83bb039f44f0d4981510b4/diff:/var/lib/docker/overlay2/335010880e7bbb7689d4210cb09578047fa8d34b6ded18dcc4d3d5a6cc4287fb/diff:/var/lib/docker/overlay2/d73ca7e5b5a047dfc79343e02709bae69f2414aaed6f2830edbd022af4e1e145/diff:/var/lib/docker/overlay2/dae580a357bf83dff3b3b546fb9cda97e6511f710c236784c68ce84657fb0337/diff:/var/lib/docker/overlay2/1842e3044746991dda288e11a2bee8a8857d749595d769968b661a0994c25215/diff:/var/lib/docker/overlay2/3fba19b5de3fbb9f62126949163b914e6dd8efdb65c12afd6e6d56214581b8a6/diff:/var/lib/docker/overlay2/6ec508232bae92f0262e74463db095e79b446d6658a903f74d6d9275dae17d55/diff:/var/lib/docker/overlay2/653b5d92bafd148a58b3febd568fb54d9ba1f3b109cac8e277d5177a216868c1/diff:/var/lib/docker/overlay2/5fb2dc662190229810bebc6d79e918be90b416edb8ee1e20e951e8031953d813/diff:/var/lib/docker/overlay2/6484c79c5b005c0d8eef871cad9010368b5332e697cb3a01cc7cc94bfed33376/diff:/var/lib/docker/overlay2/81e5b96e2d4c2697e1c6962beb6e71da710754f42e32a941f732c4efab850973/diff:/var/lib/docker/overlay2/85036ccfe63574469e3678df6445e614574f07f77c334997fac7f3ee217f5c54/diff:/var/lib/docker/overlay2/7ff8315528872300329fdbd17f11d0ea04ab7c7778244a12bc621ae84f12cf77/diff:/var/lib/docker/overlay2/c32e188bd4ec64d8f716b7885ce228c89a3c4f2777d3e33ed448911d38ceba55/diff:/var/lib/docker/overlay2/142e8c88931b6205839c329cc5ab1f40b06e30f547860d743f6d571c95a75b91/diff:/var/lib/docker/overlay2/21f148a35621027811131428e59ec3709b661b2a56e8ebfee2a95b3cdfb407e7/diff:/var/lib/docker/overlay2/9111530a9968c33f38dab8aebccd5d93acbd8d331124b7d12a0da63f86ae5768/diff:/var/lib/docker/overlay2/59aee9dd537a039e02b73dce312bf35f6cd3d34146c96208a1461e4c82a284ca/diff:/var/lib/docker/overlay2/3e4cb9f6fecb0597fc001ef0ad000a46fd7410c70475a6e8d6fb98e6d5c4f42a/diff:/var/lib/docker/overlay2/90181e6f161e52f087dda33985e81570a0802727ab8282224c85a24bea25782e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e956291c12b29486a27a5c39fd53afa3174f4380097809c8a71397595abaa7d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e956291c12b29486a27a5c39fd53afa3174f4380097809c8a71397595abaa7d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e956291c12b29486a27a5c39fd53afa3174f4380097809c8a71397595abaa7d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-231829",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-231829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-231829",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-231829",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-231829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a2793048c2691a57597be56bfa3763c1abc09d9a630b04bb9410aac88d24b1e2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49357"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49356"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49355"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a2793048c269",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-231829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f1f7240c914f",
	                        "kubernetes-upgrade-231829"
	                    ],
	                    "NetworkID": "ceaa2051b7489d242c213c6ca0588f6e54cdfe1f671600dddec32ecba32a6d97",
	                    "EndpointID": "70e71c280fe3d79978558c898564cd22be6b9884452e80cd5b8f217ba70194aa",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-231829 -n kubernetes-upgrade-231829
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-231829 -n kubernetes-upgrade-231829: exit status 2 (386.510406ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-231829 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-231938 -- sudo                    | cert-options-231938          | jenkins | v1.27.1 | 01 Nov 22 23:20 UTC | 01 Nov 22 23:20 UTC |
	|         | cat /etc/kubernetes/admin.conf                    |                              |         |         |                     |                     |
	| delete  | -p cert-options-231938                            | cert-options-231938          | jenkins | v1.27.1 | 01 Nov 22 23:20 UTC | 01 Nov 22 23:20 UTC |
	| start   | -p no-preload-232012                              | no-preload-232012            | jenkins | v1.27.1 | 01 Nov 22 23:20 UTC | 01 Nov 22 23:21 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-232012        | no-preload-232012            | jenkins | v1.27.1 | 01 Nov 22 23:21 UTC | 01 Nov 22 23:21 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p no-preload-232012                              | no-preload-232012            | jenkins | v1.27.1 | 01 Nov 22 23:21 UTC | 01 Nov 22 23:21 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-232012             | no-preload-232012            | jenkins | v1.27.1 | 01 Nov 22 23:21 UTC | 01 Nov 22 23:21 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-232012                              | no-preload-232012            | jenkins | v1.27.1 | 01 Nov 22 23:21 UTC | 01 Nov 22 23:27 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-231959   | old-k8s-version-231959       | jenkins | v1.27.1 | 01 Nov 22 23:22 UTC | 01 Nov 22 23:22 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-231959                         | old-k8s-version-231959       | jenkins | v1.27.1 | 01 Nov 22 23:22 UTC | 01 Nov 22 23:22 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| start   | -p cert-expiration-231852                         | cert-expiration-231852       | jenkins | v1.27.1 | 01 Nov 22 23:22 UTC | 01 Nov 22 23:22 UTC |
	|         | --memory=2048                                     |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                           |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-231959        | old-k8s-version-231959       | jenkins | v1.27.1 | 01 Nov 22 23:22 UTC | 01 Nov 22 23:22 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-231959                         | old-k8s-version-231959       | jenkins | v1.27.1 | 01 Nov 22 23:22 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-231852                         | cert-expiration-231852       | jenkins | v1.27.1 | 01 Nov 22 23:22 UTC | 01 Nov 22 23:22 UTC |
	| start   | -p embed-certs-232234                             | embed-certs-232234           | jenkins | v1.27.1 | 01 Nov 22 23:22 UTC | 01 Nov 22 23:23 UTC |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-232234       | embed-certs-232234           | jenkins | v1.27.1 | 01 Nov 22 23:23 UTC | 01 Nov 22 23:23 UTC |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-232234                             | embed-certs-232234           | jenkins | v1.27.1 | 01 Nov 22 23:23 UTC | 01 Nov 22 23:23 UTC |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-232234            | embed-certs-232234           | jenkins | v1.27.1 | 01 Nov 22 23:23 UTC | 01 Nov 22 23:23 UTC |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-232234                             | embed-certs-232234           | jenkins | v1.27.1 | 01 Nov 22 23:23 UTC |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-232012 sudo                         | no-preload-232012            | jenkins | v1.27.1 | 01 Nov 22 23:27 UTC | 01 Nov 22 23:27 UTC |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-232012                              | no-preload-232012            | jenkins | v1.27.1 | 01 Nov 22 23:27 UTC | 01 Nov 22 23:27 UTC |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-232012                              | no-preload-232012            | jenkins | v1.27.1 | 01 Nov 22 23:27 UTC | 01 Nov 22 23:27 UTC |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-232012                              | no-preload-232012            | jenkins | v1.27.1 | 01 Nov 22 23:27 UTC | 01 Nov 22 23:27 UTC |
	| delete  | -p no-preload-232012                              | no-preload-232012            | jenkins | v1.27.1 | 01 Nov 22 23:27 UTC | 01 Nov 22 23:27 UTC |
	| delete  | -p                                                | disable-driver-mounts-232727 | jenkins | v1.27.1 | 01 Nov 22 23:27 UTC | 01 Nov 22 23:27 UTC |
	|         | disable-driver-mounts-232727                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-232727 | jenkins | v1.27.1 | 01 Nov 22 23:27 UTC |                     |
	|         | default-k8s-diff-port-232727                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/01 23:27:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 23:27:27.770218  235472 out.go:296] Setting OutFile to fd 1 ...
	I1101 23:27:27.770365  235472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:27:27.770379  235472 out.go:309] Setting ErrFile to fd 2...
	I1101 23:27:27.770386  235472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:27:27.770510  235472 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	I1101 23:27:27.771080  235472 out.go:303] Setting JSON to false
	I1101 23:27:27.773021  235472 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4194,"bootTime":1667341054,"procs":1100,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 23:27:27.773086  235472 start.go:126] virtualization: kvm guest
	I1101 23:27:27.775885  235472 out.go:177] * [default-k8s-diff-port-232727] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1101 23:27:27.777493  235472 notify.go:220] Checking for updates...
	I1101 23:27:27.778982  235472 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 23:27:27.780534  235472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 23:27:27.782457  235472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 23:27:27.784087  235472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	I1101 23:27:27.785545  235472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 23:27:27.787231  235472 config.go:180] Loaded profile config "embed-certs-232234": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:27:27.787332  235472 config.go:180] Loaded profile config "kubernetes-upgrade-231829": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:27:27.787482  235472 config.go:180] Loaded profile config "old-k8s-version-231959": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1101 23:27:27.787529  235472 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 23:27:27.819086  235472 docker.go:137] docker version: linux-20.10.21
	I1101 23:27:27.819176  235472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:27:27.923024  235472 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-11-01 23:27:27.83952843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:27:27.923167  235472 docker.go:254] overlay module found
	I1101 23:27:27.925361  235472 out.go:177] * Using the docker driver based on user configuration
	I1101 23:27:27.926842  235472 start.go:282] selected driver: docker
	I1101 23:27:27.926864  235472 start.go:808] validating driver "docker" against <nil>
	I1101 23:27:27.926888  235472 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 23:27:27.927846  235472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:27:28.027740  235472 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-11-01 23:27:27.948598489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:27:28.027881  235472 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1101 23:27:28.028038  235472 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 23:27:28.030412  235472 out.go:177] * Using Docker driver with root privileges
	I1101 23:27:28.031909  235472 cni.go:95] Creating CNI manager for ""
	I1101 23:27:28.031934  235472 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:27:28.031950  235472 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 23:27:28.031959  235472 start_flags.go:317] config:
	{Name:default-k8s-diff-port-232727 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-232727 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:27:28.033902  235472 out.go:177] * Starting control plane node default-k8s-diff-port-232727 in cluster default-k8s-diff-port-232727
	I1101 23:27:28.035350  235472 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1101 23:27:28.036687  235472 out.go:177] * Pulling base image ...
	I1101 23:27:24.297100  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:26.298193  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:28.038060  235472 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1101 23:27:28.038102  235472 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1101 23:27:28.038123  235472 cache.go:57] Caching tarball of preloaded images
	I1101 23:27:28.038146  235472 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 23:27:28.038376  235472 preload.go:174] Found /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 23:27:28.038394  235472 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I1101 23:27:28.038515  235472 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/config.json ...
	I1101 23:27:28.038547  235472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/config.json: {Name:mk38cd3ad30a1d5f42eb1fa2417528d152d8d591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:27:28.062842  235472 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 23:27:28.062870  235472 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 23:27:28.062883  235472 cache.go:208] Successfully downloaded all kic artifacts
	I1101 23:27:28.062916  235472 start.go:364] acquiring machines lock for default-k8s-diff-port-232727: {Name:mk462d74356becb8b8e5b5847815e44b0cf313f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 23:27:28.063052  235472 start.go:368] acquired machines lock for "default-k8s-diff-port-232727" in 113.369µs
	I1101 23:27:28.063082  235472 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-232727 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-232727 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1101 23:27:28.063172  235472 start.go:125] createHost starting for "" (driver="docker")
	I1101 23:27:28.147884  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:27:28.148138  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:27:27.471740  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:29.970724  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:28.066488  235472 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1101 23:27:28.066734  235472 start.go:159] libmachine.API.Create for "default-k8s-diff-port-232727" (driver="docker")
	I1101 23:27:28.066768  235472 client.go:168] LocalClient.Create starting
	I1101 23:27:28.066826  235472 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem
	I1101 23:27:28.066859  235472 main.go:134] libmachine: Decoding PEM data...
	I1101 23:27:28.066878  235472 main.go:134] libmachine: Parsing certificate...
	I1101 23:27:28.066923  235472 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem
	I1101 23:27:28.066941  235472 main.go:134] libmachine: Decoding PEM data...
	I1101 23:27:28.066950  235472 main.go:134] libmachine: Parsing certificate...
	I1101 23:27:28.067248  235472 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-232727 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 23:27:28.090221  235472 cli_runner.go:211] docker network inspect default-k8s-diff-port-232727 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 23:27:28.090320  235472 network_create.go:272] running [docker network inspect default-k8s-diff-port-232727] to gather additional debugging logs...
	I1101 23:27:28.090339  235472 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-232727
	W1101 23:27:28.114190  235472 cli_runner.go:211] docker network inspect default-k8s-diff-port-232727 returned with exit code 1
	I1101 23:27:28.114228  235472 network_create.go:275] error running [docker network inspect default-k8s-diff-port-232727]: docker network inspect default-k8s-diff-port-232727: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-232727
	I1101 23:27:28.114246  235472 network_create.go:277] output of [docker network inspect default-k8s-diff-port-232727]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-232727
	
	** /stderr **
	I1101 23:27:28.114329  235472 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 23:27:28.140717  235472 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b9c8e174cce4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d2:38:dc:bb}}
	I1101 23:27:28.141772  235472 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-fc1228290d01 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:e4:a2:b0:46}}
	I1101 23:27:28.142865  235472 network.go:246] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-f77e7967cd66 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:08:b0:75:71}}
	I1101 23:27:28.143825  235472 network.go:246] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-ceaa2051b748 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:49:e6:09:f7}}
	I1101 23:27:28.144534  235472 network.go:246] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName:br-1f8e2eecaf26 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:e2:86:12:fa}}
	I1101 23:27:28.145955  235472 network.go:295] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.94.0:0xc000590568] misses:0}
	I1101 23:27:28.145999  235472 network.go:241] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 23:27:28.146013  235472 network_create.go:115] attempt to create docker network default-k8s-diff-port-232727 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1101 23:27:28.146074  235472 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-232727 default-k8s-diff-port-232727
	I1101 23:27:28.207881  235472 network_create.go:99] docker network default-k8s-diff-port-232727 192.168.94.0/24 created
	I1101 23:27:28.207911  235472 kic.go:106] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-232727" container
	I1101 23:27:28.207980  235472 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 23:27:28.231800  235472 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-232727 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-232727 --label created_by.minikube.sigs.k8s.io=true
	I1101 23:27:28.255414  235472 oci.go:103] Successfully created a docker volume default-k8s-diff-port-232727
	I1101 23:27:28.255493  235472 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-232727-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-232727 --entrypoint /usr/bin/test -v default-k8s-diff-port-232727:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1101 23:27:28.848185  235472 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-232727
	I1101 23:27:28.848217  235472 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1101 23:27:28.848237  235472 kic.go:179] Starting extracting preloaded images to volume ...
	I1101 23:27:28.848286  235472 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-232727:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 23:27:28.795303  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:30.796252  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:31.970991  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:33.971175  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:34.848579  235472 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-232727:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (6.000217274s)
	I1101 23:27:34.848626  235472 kic.go:188] duration metric: took 6.000376 seconds to extract preloaded images to volume
	W1101 23:27:34.848777  235472 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 23:27:34.848894  235472 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 23:27:34.950461  235472 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-232727 --name default-k8s-diff-port-232727 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-232727 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-232727 --network default-k8s-diff-port-232727 --ip 192.168.94.2 --volume default-k8s-diff-port-232727:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1101 23:27:35.342294  235472 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-232727 --format={{.State.Running}}
	I1101 23:27:35.371145  235472 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-232727 --format={{.State.Status}}
	I1101 23:27:35.396527  235472 cli_runner.go:164] Run: docker exec default-k8s-diff-port-232727 stat /var/lib/dpkg/alternatives/iptables
	I1101 23:27:35.448983  235472 oci.go:144] the created container "default-k8s-diff-port-232727" has a running status.
	I1101 23:27:35.449022  235472 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15232-6112/.minikube/machines/default-k8s-diff-port-232727/id_rsa...
	I1101 23:27:35.702326  235472 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15232-6112/.minikube/machines/default-k8s-diff-port-232727/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 23:27:35.781395  235472 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-232727 --format={{.State.Status}}
	I1101 23:27:35.811919  235472 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 23:27:35.811947  235472 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-232727 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 23:27:35.899233  235472 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-232727 --format={{.State.Status}}
	I1101 23:27:35.927247  235472 machine.go:88] provisioning docker machine ...
	I1101 23:27:35.927306  235472 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-232727"
	I1101 23:27:35.927364  235472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-232727
	I1101 23:27:35.953332  235472 main.go:134] libmachine: Using SSH client type: native
	I1101 23:27:35.953517  235472 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49398 <nil> <nil>}
	I1101 23:27:35.953538  235472 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-232727 && echo "default-k8s-diff-port-232727" | sudo tee /etc/hostname
	I1101 23:27:36.080035  235472 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-232727
	
	I1101 23:27:36.080103  235472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-232727
	I1101 23:27:36.104469  235472 main.go:134] libmachine: Using SSH client type: native
	I1101 23:27:36.104635  235472 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49398 <nil> <nil>}
	I1101 23:27:36.104668  235472 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-232727' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-232727/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-232727' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 23:27:36.219224  235472 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 23:27:36.219264  235472 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-6112/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-6112/.minikube}
	I1101 23:27:36.219288  235472 ubuntu.go:177] setting up certificates
	I1101 23:27:36.219296  235472 provision.go:83] configureAuth start
	I1101 23:27:36.219341  235472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-232727
	I1101 23:27:36.243792  235472 provision.go:138] copyHostCerts
	I1101 23:27:36.243850  235472 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem, removing ...
	I1101 23:27:36.243859  235472 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem
	I1101 23:27:36.243918  235472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem (1123 bytes)
	I1101 23:27:36.243989  235472 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem, removing ...
	I1101 23:27:36.243997  235472 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem
	I1101 23:27:36.244022  235472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem (1675 bytes)
	I1101 23:27:36.244074  235472 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem, removing ...
	I1101 23:27:36.244082  235472 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem
	I1101 23:27:36.244104  235472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem (1078 bytes)
	I1101 23:27:36.244145  235472 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-232727 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-232727]
	I1101 23:27:36.410017  235472 provision.go:172] copyRemoteCerts
	I1101 23:27:36.410077  235472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 23:27:36.410116  235472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-232727
	I1101 23:27:36.434962  235472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/default-k8s-diff-port-232727/id_rsa Username:docker}
	I1101 23:27:36.518776  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 23:27:36.535812  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 23:27:36.552259  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1101 23:27:36.568486  235472 provision.go:86] duration metric: configureAuth took 349.179497ms
	I1101 23:27:36.568513  235472 ubuntu.go:193] setting minikube options for container-runtime
	I1101 23:27:36.568656  235472 config.go:180] Loaded profile config "default-k8s-diff-port-232727": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:27:36.568667  235472 machine.go:91] provisioned docker machine in 641.392074ms
	I1101 23:27:36.568673  235472 client.go:171] LocalClient.Create took 8.501901041s
	I1101 23:27:36.568690  235472 start.go:167] duration metric: libmachine.API.Create for "default-k8s-diff-port-232727" took 8.501956264s
	I1101 23:27:36.568699  235472 start.go:300] post-start starting for "default-k8s-diff-port-232727" (driver="docker")
	I1101 23:27:36.568705  235472 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 23:27:36.568736  235472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 23:27:36.568770  235472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-232727
	I1101 23:27:36.594279  235472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/default-k8s-diff-port-232727/id_rsa Username:docker}
	I1101 23:27:36.682834  235472 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 23:27:36.685443  235472 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 23:27:36.685474  235472 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 23:27:36.685489  235472 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 23:27:36.685495  235472 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 23:27:36.685503  235472 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/addons for local assets ...
	I1101 23:27:36.685551  235472 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/files for local assets ...
	I1101 23:27:36.685611  235472 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem -> 128402.pem in /etc/ssl/certs
	I1101 23:27:36.685683  235472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 23:27:36.692226  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /etc/ssl/certs/128402.pem (1708 bytes)
	I1101 23:27:36.708852  235472 start.go:303] post-start completed in 140.141354ms
	I1101 23:27:36.709194  235472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-232727
	I1101 23:27:36.733956  235472 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/config.json ...
	I1101 23:27:36.734162  235472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 23:27:36.734197  235472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-232727
	I1101 23:27:36.757904  235472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/default-k8s-diff-port-232727/id_rsa Username:docker}
	I1101 23:27:36.839714  235472 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 23:27:36.843492  235472 start.go:128] duration metric: createHost completed in 8.780308436s
	I1101 23:27:36.843515  235472 start.go:83] releasing machines lock for "default-k8s-diff-port-232727", held for 8.780448791s
	I1101 23:27:36.843593  235472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-232727
	I1101 23:27:36.867721  235472 ssh_runner.go:195] Run: systemctl --version
	I1101 23:27:36.867774  235472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-232727
	I1101 23:27:36.867794  235472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 23:27:36.867881  235472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-232727
	I1101 23:27:36.895142  235472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/default-k8s-diff-port-232727/id_rsa Username:docker}
	I1101 23:27:36.896917  235472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/default-k8s-diff-port-232727/id_rsa Username:docker}
	I1101 23:27:36.979445  235472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 23:27:37.012501  235472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 23:27:37.021670  235472 docker.go:189] disabling docker service ...
	I1101 23:27:37.021728  235472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 23:27:37.037360  235472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 23:27:37.046371  235472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 23:27:37.135951  235472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 23:27:37.220192  235472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 23:27:37.229648  235472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 23:27:37.242045  235472 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I1101 23:27:37.249536  235472 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1101 23:27:37.256906  235472 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1101 23:27:37.264396  235472 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I1101 23:27:37.272064  235472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 23:27:37.278041  235472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 23:27:37.284124  235472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 23:27:37.362708  235472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 23:27:37.429682  235472 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1101 23:27:37.429750  235472 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1101 23:27:37.433639  235472 start.go:472] Will wait 60s for crictl version
	I1101 23:27:37.433688  235472 ssh_runner.go:195] Run: sudo crictl version
	I1101 23:27:37.458978  235472 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1101 23:27:37.459027  235472 ssh_runner.go:195] Run: containerd --version
	I1101 23:27:37.483724  235472 ssh_runner.go:195] Run: containerd --version
	I1101 23:27:37.511133  235472 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	I1101 23:27:37.512789  235472 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-232727 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 23:27:37.537437  235472 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 23:27:37.540659  235472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 23:27:37.549927  235472 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1101 23:27:37.549979  235472 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 23:27:37.572875  235472 containerd.go:553] all images are preloaded for containerd runtime.
	I1101 23:27:37.572902  235472 containerd.go:467] Images already preloaded, skipping extraction
	I1101 23:27:37.572955  235472 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 23:27:37.595393  235472 containerd.go:553] all images are preloaded for containerd runtime.
	I1101 23:27:37.595444  235472 cache_images.go:84] Images are preloaded, skipping loading
	I1101 23:27:37.595482  235472 ssh_runner.go:195] Run: sudo crictl info
	I1101 23:27:37.620021  235472 cni.go:95] Creating CNI manager for ""
	I1101 23:27:37.620051  235472 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:27:37.620062  235472 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 23:27:37.620073  235472 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-232727 NodeName:default-k8s-diff-port-232727 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 23:27:37.620213  235472 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-diff-port-232727"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 23:27:37.620293  235472 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-diff-port-232727 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-232727 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1101 23:27:37.620343  235472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1101 23:27:37.627241  235472 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 23:27:37.627292  235472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 23:27:37.633921  235472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (521 bytes)
	I1101 23:27:37.645832  235472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 23:27:37.657941  235472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
	I1101 23:27:37.669884  235472 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 23:27:37.672530  235472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 23:27:37.681084  235472 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727 for IP: 192.168.94.2
	I1101 23:27:37.681199  235472 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key
	I1101 23:27:37.681252  235472 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key
	I1101 23:27:37.681314  235472 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/client.key
	I1101 23:27:37.681333  235472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/client.crt with IP's: []
	I1101 23:27:33.295365  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:35.796376  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:37.898083  235472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/client.crt ...
	I1101 23:27:37.898109  235472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/client.crt: {Name:mk23819b21e31fd53d8e64ed5b45f47dd9b01c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:27:37.898313  235472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/client.key ...
	I1101 23:27:37.898331  235472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/client.key: {Name:mke2ab3a030499ca05f5afba8d8026f366272dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:27:37.898459  235472 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.key.ad8e880a
	I1101 23:27:37.898484  235472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 23:27:38.071484  235472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.crt.ad8e880a ...
	I1101 23:27:38.071511  235472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.crt.ad8e880a: {Name:mk7559db243ccbdc1ef299fed84b171b49e379b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:27:38.071686  235472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.key.ad8e880a ...
	I1101 23:27:38.071702  235472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.key.ad8e880a: {Name:mkf95cb2e5d9ff227dd17ca75a288a2e0688777f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:27:38.071788  235472 certs.go:320] copying /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.crt
	I1101 23:27:38.071846  235472 certs.go:324] copying /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.key
	I1101 23:27:38.071890  235472 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/proxy-client.key
	I1101 23:27:38.071905  235472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/proxy-client.crt with IP's: []
	I1101 23:27:38.388251  235472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/proxy-client.crt ...
	I1101 23:27:38.388286  235472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/proxy-client.crt: {Name:mkf544833fbde86ed1dfee10defb1b7e2a347291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:27:38.388479  235472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/proxy-client.key ...
	I1101 23:27:38.388493  235472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/proxy-client.key: {Name:mkd483ebcfa90e5645dd0a0fe46df917955170a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:27:38.388674  235472 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem (1338 bytes)
	W1101 23:27:38.388711  235472 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840_empty.pem, impossibly tiny 0 bytes
	I1101 23:27:38.388724  235472 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 23:27:38.388756  235472 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem (1078 bytes)
	I1101 23:27:38.388780  235472 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem (1123 bytes)
	I1101 23:27:38.388800  235472 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem (1675 bytes)
	I1101 23:27:38.388836  235472 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem (1708 bytes)
	I1101 23:27:38.389331  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 23:27:38.407413  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 23:27:38.425199  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 23:27:38.442044  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/default-k8s-diff-port-232727/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 23:27:38.459078  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 23:27:38.476384  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 23:27:38.492985  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 23:27:38.509354  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 23:27:38.525896  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 23:27:38.542023  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem --> /usr/share/ca-certificates/12840.pem (1338 bytes)
	I1101 23:27:38.558236  235472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /usr/share/ca-certificates/128402.pem (1708 bytes)
	I1101 23:27:38.574604  235472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 23:27:38.586248  235472 ssh_runner.go:195] Run: openssl version
	I1101 23:27:38.591041  235472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128402.pem && ln -fs /usr/share/ca-certificates/128402.pem /etc/ssl/certs/128402.pem"
	I1101 23:27:38.598187  235472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128402.pem
	I1101 23:27:38.601384  235472 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:50 /usr/share/ca-certificates/128402.pem
	I1101 23:27:38.601433  235472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128402.pem
	I1101 23:27:38.605971  235472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128402.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 23:27:38.613273  235472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 23:27:38.620903  235472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:27:38.623964  235472 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:27:38.624018  235472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:27:38.629030  235472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 23:27:38.636408  235472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12840.pem && ln -fs /usr/share/ca-certificates/12840.pem /etc/ssl/certs/12840.pem"
	I1101 23:27:38.643534  235472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12840.pem
	I1101 23:27:38.646495  235472 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:50 /usr/share/ca-certificates/12840.pem
	I1101 23:27:38.646540  235472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12840.pem
	I1101 23:27:38.651030  235472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12840.pem /etc/ssl/certs/51391683.0"
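	The `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above are minikube installing its CAs the same way `c_rehash` does: each PEM gets a symlink named after its subject hash so OpenSSL's `-CApath` lookup can find the issuer. A minimal sketch of that mechanism (assumption: the `openssl` CLI is available; a throwaway self-signed CA stands in for minikubeCA.pem):

```shell
set -e
dir=$(mktemp -d)
# create a throwaway self-signed CA to stand in for minikubeCA.pem
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# compute the subject hash, e.g. b5213941 in the log above
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# the <hash>.0 symlink is what -CApath lookups resolve
ln -fs "$dir/ca.pem" "$dir/$hash.0"
# verification now succeeds via the hash symlink
openssl verify -CApath "$dir" "$dir/ca.pem"
```

	The `.0` suffix is a collision counter; a second cert with the same subject hash would be linked as `<hash>.1`.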
	I1101 23:27:38.658035  235472 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-232727 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-232727 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:27:38.658108  235472 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1101 23:27:38.658136  235472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 23:27:38.681816  235472 cri.go:87] found id: ""
	I1101 23:27:38.681867  235472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 23:27:38.688502  235472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 23:27:38.694917  235472 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 23:27:38.694964  235472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:27:38.701581  235472 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 23:27:38.701624  235472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 23:27:38.742642  235472 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1101 23:27:38.742710  235472 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 23:27:38.769534  235472 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1101 23:27:38.769650  235472 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1101 23:27:38.769723  235472 kubeadm.go:317] OS: Linux
	I1101 23:27:38.769801  235472 kubeadm.go:317] CGROUPS_CPU: enabled
	I1101 23:27:38.769878  235472 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1101 23:27:38.769942  235472 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1101 23:27:38.770059  235472 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1101 23:27:38.770139  235472 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1101 23:27:38.770226  235472 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1101 23:27:38.770292  235472 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1101 23:27:38.770353  235472 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1101 23:27:38.770412  235472 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1101 23:27:38.837000  235472 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 23:27:38.837212  235472 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 23:27:38.837357  235472 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 23:27:38.951121  235472 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 23:27:35.971257  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:38.470509  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:38.954971  235472 out.go:204]   - Generating certificates and keys ...
	I1101 23:27:38.955110  235472 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 23:27:38.955223  235472 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 23:27:39.030231  235472 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 23:27:39.145451  235472 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1101 23:27:39.285694  235472 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1101 23:27:39.546327  235472 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1101 23:27:39.808687  235472 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1101 23:27:39.808908  235472 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-232727 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 23:27:39.999092  235472 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1101 23:27:39.999318  235472 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-232727 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 23:27:40.205069  235472 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 23:27:40.334121  235472 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 23:27:40.649848  235472 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1101 23:27:40.649994  235472 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 23:27:40.729520  235472 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 23:27:41.070937  235472 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 23:27:41.225745  235472 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 23:27:41.430603  235472 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 23:27:41.443574  235472 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 23:27:41.444447  235472 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 23:27:41.444508  235472 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1101 23:27:41.522385  235472 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 23:27:41.525955  235472 out.go:204]   - Booting up control plane ...
	I1101 23:27:41.526140  235472 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 23:27:41.526251  235472 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 23:27:41.526985  235472 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 23:27:41.527734  235472 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 23:27:41.529601  235472 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 23:27:38.295448  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:40.795032  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:40.470722  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:42.471153  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:44.471615  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:47.532530  235472 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002874 seconds
	I1101 23:27:47.532718  235472 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 23:27:47.543432  235472 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 23:27:43.295628  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:45.795758  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:48.061064  235472 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 23:27:48.061250  235472 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-diff-port-232727 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 23:27:48.568448  235472 kubeadm.go:317] [bootstrap-token] Using token: lzqet4.0wyuo607wgyhvpfw
	I1101 23:27:48.570186  235472 out.go:204]   - Configuring RBAC rules ...
	I1101 23:27:48.570329  235472 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 23:27:48.572917  235472 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 23:27:48.577633  235472 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 23:27:48.579580  235472 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 23:27:48.581567  235472 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 23:27:48.583484  235472 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 23:27:48.590262  235472 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 23:27:48.803231  235472 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1101 23:27:49.015226  235472 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1101 23:27:49.016518  235472 kubeadm.go:317] 
	I1101 23:27:49.016611  235472 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1101 23:27:49.016624  235472 kubeadm.go:317] 
	I1101 23:27:49.016729  235472 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1101 23:27:49.016741  235472 kubeadm.go:317] 
	I1101 23:27:49.016801  235472 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1101 23:27:49.016900  235472 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 23:27:49.016969  235472 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 23:27:49.017005  235472 kubeadm.go:317] 
	I1101 23:27:49.017079  235472 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1101 23:27:49.017094  235472 kubeadm.go:317] 
	I1101 23:27:49.017156  235472 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 23:27:49.017169  235472 kubeadm.go:317] 
	I1101 23:27:49.017233  235472 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1101 23:27:49.017335  235472 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 23:27:49.017423  235472 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 23:27:49.017437  235472 kubeadm.go:317] 
	I1101 23:27:49.017542  235472 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 23:27:49.017637  235472 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1101 23:27:49.017653  235472 kubeadm.go:317] 
	I1101 23:27:49.017768  235472 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token lzqet4.0wyuo607wgyhvpfw \
	I1101 23:27:49.017900  235472 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:035b63f088f323cab437251192a32166cf4377fef2aef8dc417cb1e55982412e \
	I1101 23:27:49.017932  235472 kubeadm.go:317] 	--control-plane 
	I1101 23:27:49.017943  235472 kubeadm.go:317] 
	I1101 23:27:49.018040  235472 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1101 23:27:49.018057  235472 kubeadm.go:317] 
	I1101 23:27:49.018154  235472 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token lzqet4.0wyuo607wgyhvpfw \
	I1101 23:27:49.018276  235472 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:035b63f088f323cab437251192a32166cf4377fef2aef8dc417cb1e55982412e 
	I1101 23:27:49.020456  235472 kubeadm.go:317] W1101 23:27:38.735246     741 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1101 23:27:49.020822  235472 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1101 23:27:49.020984  235472 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 23:27:49.021000  235472 cni.go:95] Creating CNI manager for ""
	I1101 23:27:49.021010  235472 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 23:27:49.024205  235472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 23:27:46.971124  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:49.470144  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:49.025944  235472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 23:27:49.030240  235472 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1101 23:27:49.030262  235472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1101 23:27:49.044551  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 23:27:49.783075  235472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 23:27:49.783140  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:49.783160  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=65bfd3dc2bf9824cf305579b01895f56b2ba9210 minikube.k8s.io/name=default-k8s-diff-port-232727 minikube.k8s.io/updated_at=2022_11_01T23_27_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:49.869888  235472 ops.go:34] apiserver oom_adj: -16
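	The `oom_adj: -16` check above reads `/proc/<pid>/oom_adj` for the apiserver, confirming the kernel OOM killer will prefer other processes when memory runs out. A sketch of the same read (assumption: using the current shell's PID `$$` instead of a real kube-apiserver):

```shell
pid=$$
# legacy knob (range -17..15); still exposed on the 5.15 kernel in this log
[ -r "/proc/$pid/oom_adj" ] && cat "/proc/$pid/oom_adj"
# modern equivalent (range -1000..1000); an oom_adj of -16 maps to about -941
cat "/proc/$pid/oom_score_adj"
```

	Writing a negative `oom_score_adj` requires CAP_SYS_RESOURCE, which is why kubelet, not the container image, sets it for control-plane pods.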
	I1101 23:27:49.870052  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:50.466578  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:50.966913  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:51.466673  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:51.967030  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:52.466056  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:48.294998  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:50.295940  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:52.795679  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:51.472948  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:53.970610  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:52.967074  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:53.466236  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:53.966884  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:54.466199  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:54.966613  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:55.466706  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:55.966063  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:56.466799  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:56.966189  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:57.466881  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:55.295223  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:57.794691  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:55.970916  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:58.470657  209671 pod_ready.go:102] pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace has status "Ready":"False"
	I1101 23:27:58.966737  209671 pod_ready.go:81] duration metric: took 4m0.400601341s waiting for pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace to be "Ready" ...
	E1101 23:27:58.966763  209671 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7958775c-6hl8l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1101 23:27:58.966780  209671 pod_ready.go:38] duration metric: took 4m1.599778709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 23:27:58.966807  209671 kubeadm.go:631] restartCluster took 5m11.607730727s
	W1101 23:27:58.966950  209671 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1101 23:27:58.966995  209671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1101 23:28:01.309450  209671 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.342432182s)
	I1101 23:28:01.309524  209671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:28:01.319258  209671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 23:28:01.326168  209671 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 23:28:01.326220  209671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:28:01.333065  209671 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 23:28:01.333107  209671 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 23:28:01.379907  209671 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1101 23:28:01.379977  209671 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 23:28:01.407173  209671 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1101 23:28:01.407264  209671 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1101 23:28:01.407315  209671 kubeadm.go:317] OS: Linux
	I1101 23:28:01.407425  209671 kubeadm.go:317] CGROUPS_CPU: enabled
	I1101 23:28:01.407479  209671 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1101 23:28:01.407555  209671 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1101 23:28:01.407635  209671 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1101 23:28:01.407712  209671 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1101 23:28:01.407757  209671 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1101 23:28:01.474259  209671 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 23:28:01.474402  209671 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 23:28:01.474544  209671 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 23:28:01.607836  209671 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 23:28:01.608895  209671 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 23:28:01.615706  209671 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1101 23:28:01.685304  209671 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 23:27:57.966456  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:58.466702  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:58.966753  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:59.466271  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:27:59.966107  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:28:00.466176  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:28:00.966804  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:28:01.466773  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:28:01.966893  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:28:02.466420  235472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:28:02.546562  235472 kubeadm.go:1067] duration metric: took 12.763471483s to wait for elevateKubeSystemPrivileges.
	I1101 23:28:02.546604  235472 kubeadm.go:398] StartCluster complete in 23.888574713s
	I1101 23:28:02.546637  235472 settings.go:142] acquiring lock: {Name:mk15316af474a840de6d06c1a5891b6bc5e64510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:28:02.546756  235472 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 23:28:02.548752  235472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/kubeconfig: {Name:mk05c0f2e138ac359064389ca5eb4fadba1c406f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:28:01.691987  209671 out.go:204]   - Generating certificates and keys ...
	I1101 23:28:01.692111  209671 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 23:28:01.692213  209671 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 23:28:01.692326  209671 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 23:28:01.692413  209671 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1101 23:28:01.692518  209671 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 23:28:01.692574  209671 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1101 23:28:01.692627  209671 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1101 23:28:01.692689  209671 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1101 23:28:01.692767  209671 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 23:28:01.692854  209671 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 23:28:01.692888  209671 kubeadm.go:317] [certs] Using the existing "sa" key
	I1101 23:28:01.692935  209671 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 23:28:01.954521  209671 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 23:28:02.083870  209671 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 23:28:02.513403  209671 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 23:28:02.932331  209671 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 23:28:02.933117  209671 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 23:27:59.796654  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:28:02.295924  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:28:03.067447  235472 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-diff-port-232727" rescaled to 1
	I1101 23:28:03.067502  235472 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1101 23:28:03.069207  235472 out.go:177] * Verifying Kubernetes components...
	I1101 23:28:03.067561  235472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 23:28:03.067569  235472 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I1101 23:28:03.067726  235472 config.go:180] Loaded profile config "default-k8s-diff-port-232727": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:28:03.070832  235472 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-232727"
	I1101 23:28:03.070846  235472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:28:03.070855  235472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-232727"
	I1101 23:28:03.070832  235472 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-232727"
	I1101 23:28:03.070922  235472 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-diff-port-232727"
	W1101 23:28:03.070933  235472 addons.go:162] addon storage-provisioner should already be in state true
	I1101 23:28:03.070972  235472 host.go:66] Checking if "default-k8s-diff-port-232727" exists ...
	I1101 23:28:03.071139  235472 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-232727 --format={{.State.Status}}
	I1101 23:28:03.071407  235472 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-232727 --format={{.State.Status}}
	I1101 23:28:03.083314  235472 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-232727" to be "Ready" ...
	I1101 23:28:03.102600  235472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:28:03.104281  235472 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 23:28:03.104310  235472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 23:28:03.104363  235472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-232727
	I1101 23:28:03.105215  235472 addons.go:153] Setting addon default-storageclass=true in "default-k8s-diff-port-232727"
	W1101 23:28:03.105239  235472 addons.go:162] addon default-storageclass should already be in state true
	I1101 23:28:03.105270  235472 host.go:66] Checking if "default-k8s-diff-port-232727" exists ...
	I1101 23:28:03.105667  235472 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-232727 --format={{.State.Status}}
	I1101 23:28:03.137352  235472 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 23:28:03.137377  235472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 23:28:03.137431  235472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-232727
	I1101 23:28:03.137939  235472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/default-k8s-diff-port-232727/id_rsa Username:docker}
	I1101 23:28:03.154797  235472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 23:28:03.174529  235472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/default-k8s-diff-port-232727/id_rsa Username:docker}
	I1101 23:28:03.238661  235472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 23:28:03.429259  235472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 23:28:03.723720  235472 start.go:826] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I1101 23:28:03.870465  235472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1101 23:28:02.936239  209671 out.go:204]   - Booting up control plane ...
	I1101 23:28:02.936369  209671 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 23:28:02.939699  209671 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 23:28:02.940785  209671 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 23:28:02.941635  209671 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 23:28:02.944275  209671 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 23:28:03.871898  235472 addons.go:414] enableAddons completed in 804.342417ms
	I1101 23:28:05.090480  235472 node_ready.go:58] node "default-k8s-diff-port-232727" has status "Ready":"False"
	I1101 23:28:07.589844  235472 node_ready.go:58] node "default-k8s-diff-port-232727" has status "Ready":"False"
	I1101 23:28:04.795082  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:28:06.795229  219690 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-l9xfh" in "kube-system" namespace has status "Ready":"False"
	I1101 23:28:08.148535  185407 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1101 23:28:08.148824  185407 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1101 23:28:08.148865  185407 kubeadm.go:317] 
	I1101 23:28:08.148928  185407 kubeadm.go:317] Unfortunately, an error has occurred:
	I1101 23:28:08.148993  185407 kubeadm.go:317] 	timed out waiting for the condition
	I1101 23:28:08.149006  185407 kubeadm.go:317] 
	I1101 23:28:08.149056  185407 kubeadm.go:317] This error is likely caused by:
	I1101 23:28:08.149127  185407 kubeadm.go:317] 	- The kubelet is not running
	I1101 23:28:08.149289  185407 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1101 23:28:08.149303  185407 kubeadm.go:317] 
	I1101 23:28:08.149423  185407 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1101 23:28:08.149474  185407 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1101 23:28:08.149511  185407 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1101 23:28:08.149521  185407 kubeadm.go:317] 
	I1101 23:28:08.149672  185407 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1101 23:28:08.149791  185407 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1101 23:28:08.149924  185407 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1101 23:28:08.150106  185407 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I1101 23:28:08.150231  185407 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1101 23:28:08.150376  185407 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I1101 23:28:08.151585  185407 kubeadm.go:317] W1101 23:26:12.252784   11479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1101 23:28:08.151824  185407 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1101 23:28:08.151960  185407 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 23:28:08.152103  185407 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1101 23:28:08.152169  185407 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1101 23:28:08.152244  185407 kubeadm.go:398] StartCluster complete in 8m5.994430898s
	I1101 23:28:08.152280  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1101 23:28:08.152328  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 23:28:08.176225  185407 cri.go:87] found id: ""
	I1101 23:28:08.176251  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.176260  185407 logs.go:276] No container was found matching "kube-apiserver"
	I1101 23:28:08.176270  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1101 23:28:08.176327  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 23:28:08.199234  185407 cri.go:87] found id: ""
	I1101 23:28:08.199258  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.199266  185407 logs.go:276] No container was found matching "etcd"
	I1101 23:28:08.199274  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1101 23:28:08.199322  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 23:28:08.221154  185407 cri.go:87] found id: ""
	I1101 23:28:08.221176  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.221183  185407 logs.go:276] No container was found matching "coredns"
	I1101 23:28:08.221188  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1101 23:28:08.221230  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 23:28:08.243882  185407 cri.go:87] found id: ""
	I1101 23:28:08.243906  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.243914  185407 logs.go:276] No container was found matching "kube-scheduler"
	I1101 23:28:08.243920  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1101 23:28:08.243966  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 23:28:08.265488  185407 cri.go:87] found id: ""
	I1101 23:28:08.265514  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.265520  185407 logs.go:276] No container was found matching "kube-proxy"
	I1101 23:28:08.265526  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1101 23:28:08.265563  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1101 23:28:08.290043  185407 cri.go:87] found id: ""
	I1101 23:28:08.290075  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.290084  185407 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1101 23:28:08.290092  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1101 23:28:08.290143  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 23:28:08.317739  185407 cri.go:87] found id: ""
	I1101 23:28:08.317770  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.317780  185407 logs.go:276] No container was found matching "storage-provisioner"
	I1101 23:28:08.317789  185407 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 23:28:08.317844  185407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 23:28:08.345570  185407 cri.go:87] found id: ""
	I1101 23:28:08.345602  185407 logs.go:274] 0 containers: []
	W1101 23:28:08.345612  185407 logs.go:276] No container was found matching "kube-controller-manager"
	I1101 23:28:08.345623  185407 logs.go:123] Gathering logs for kubelet ...
	I1101 23:28:08.345637  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1101 23:28:08.367361  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:18 kubernetes-upgrade-231829 kubelet[12586]: E1101 23:27:18.390338   12586 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.367827  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:19 kubernetes-upgrade-231829 kubelet[12597]: E1101 23:27:19.139874   12597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.368328  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:19 kubernetes-upgrade-231829 kubelet[12609]: E1101 23:27:19.896066   12609 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.368905  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:20 kubernetes-upgrade-231829 kubelet[12620]: E1101 23:27:20.636659   12620 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.369473  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:21 kubernetes-upgrade-231829 kubelet[12631]: E1101 23:27:21.400605   12631 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.369963  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:22 kubernetes-upgrade-231829 kubelet[12642]: E1101 23:27:22.138520   12642 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.370402  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:22 kubernetes-upgrade-231829 kubelet[12653]: E1101 23:27:22.897498   12653 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.370772  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:23 kubernetes-upgrade-231829 kubelet[12663]: E1101 23:27:23.643138   12663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.371249  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:24 kubernetes-upgrade-231829 kubelet[12673]: E1101 23:27:24.396966   12673 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.371861  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:25 kubernetes-upgrade-231829 kubelet[12684]: E1101 23:27:25.154324   12684 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.372373  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:25 kubernetes-upgrade-231829 kubelet[12695]: E1101 23:27:25.891181   12695 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.372883  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:26 kubernetes-upgrade-231829 kubelet[12706]: E1101 23:27:26.641044   12706 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.373477  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:27 kubernetes-upgrade-231829 kubelet[12717]: E1101 23:27:27.394650   12717 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.373996  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:28 kubernetes-upgrade-231829 kubelet[12727]: E1101 23:27:28.143993   12727 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.374535  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:28 kubernetes-upgrade-231829 kubelet[12737]: E1101 23:27:28.901435   12737 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.374984  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:29 kubernetes-upgrade-231829 kubelet[12747]: E1101 23:27:29.650465   12747 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.375603  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:30 kubernetes-upgrade-231829 kubelet[12758]: E1101 23:27:30.402926   12758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.376122  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:31 kubernetes-upgrade-231829 kubelet[12769]: E1101 23:27:31.145486   12769 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.376740  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:31 kubernetes-upgrade-231829 kubelet[12780]: E1101 23:27:31.891079   12780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.377279  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:32 kubernetes-upgrade-231829 kubelet[12791]: E1101 23:27:32.639605   12791 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.377873  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:33 kubernetes-upgrade-231829 kubelet[12802]: E1101 23:27:33.392107   12802 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.378310  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:34 kubernetes-upgrade-231829 kubelet[12813]: E1101 23:27:34.138870   12813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.378752  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:34 kubernetes-upgrade-231829 kubelet[12824]: E1101 23:27:34.893846   12824 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.379302  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:35 kubernetes-upgrade-231829 kubelet[12835]: E1101 23:27:35.651312   12835 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.379774  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:36 kubernetes-upgrade-231829 kubelet[12845]: E1101 23:27:36.391392   12845 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.380173  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:37 kubernetes-upgrade-231829 kubelet[12856]: E1101 23:27:37.143847   12856 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.380643  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:37 kubernetes-upgrade-231829 kubelet[12867]: E1101 23:27:37.889713   12867 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.381061  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:38 kubernetes-upgrade-231829 kubelet[12878]: E1101 23:27:38.641845   12878 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.381528  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:39 kubernetes-upgrade-231829 kubelet[12889]: E1101 23:27:39.394820   12889 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.382097  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:40 kubernetes-upgrade-231829 kubelet[12900]: E1101 23:27:40.145508   12900 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.382534  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:40 kubernetes-upgrade-231829 kubelet[12911]: E1101 23:27:40.890940   12911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.382956  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:41 kubernetes-upgrade-231829 kubelet[12922]: E1101 23:27:41.640999   12922 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.383572  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:42 kubernetes-upgrade-231829 kubelet[12935]: E1101 23:27:42.391366   12935 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.384065  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:43 kubernetes-upgrade-231829 kubelet[12946]: E1101 23:27:43.151607   12946 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.384598  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:43 kubernetes-upgrade-231829 kubelet[12956]: E1101 23:27:43.890804   12956 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.385231  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:44 kubernetes-upgrade-231829 kubelet[12966]: E1101 23:27:44.650554   12966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.385855  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:45 kubernetes-upgrade-231829 kubelet[12977]: E1101 23:27:45.391536   12977 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.386299  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:46 kubernetes-upgrade-231829 kubelet[12988]: E1101 23:27:46.161335   12988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.386693  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:46 kubernetes-upgrade-231829 kubelet[12998]: E1101 23:27:46.888241   12998 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.387237  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:47 kubernetes-upgrade-231829 kubelet[13009]: E1101 23:27:47.638878   13009 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.387850  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:48 kubernetes-upgrade-231829 kubelet[13020]: E1101 23:27:48.394035   13020 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.388460  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:49 kubernetes-upgrade-231829 kubelet[13031]: E1101 23:27:49.143774   13031 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.388919  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:49 kubernetes-upgrade-231829 kubelet[13041]: E1101 23:27:49.896404   13041 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.389487  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:50 kubernetes-upgrade-231829 kubelet[13052]: E1101 23:27:50.639330   13052 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.390042  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:51 kubernetes-upgrade-231829 kubelet[13063]: E1101 23:27:51.392211   13063 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.390652  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:52 kubernetes-upgrade-231829 kubelet[13074]: E1101 23:27:52.139809   13074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.391164  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:52 kubernetes-upgrade-231829 kubelet[13085]: E1101 23:27:52.890814   13085 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.391637  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:53 kubernetes-upgrade-231829 kubelet[13096]: E1101 23:27:53.638640   13096 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.392064  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:54 kubernetes-upgrade-231829 kubelet[13108]: E1101 23:27:54.392585   13108 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.392544  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:55 kubernetes-upgrade-231829 kubelet[13119]: E1101 23:27:55.141081   13119 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.393148  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:55 kubernetes-upgrade-231829 kubelet[13130]: E1101 23:27:55.890164   13130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.393703  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:56 kubernetes-upgrade-231829 kubelet[13141]: E1101 23:27:56.639793   13141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.394140  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:57 kubernetes-upgrade-231829 kubelet[13152]: E1101 23:27:57.392278   13152 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.394646  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:58 kubernetes-upgrade-231829 kubelet[13163]: E1101 23:27:58.140104   13163 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.395213  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:58 kubernetes-upgrade-231829 kubelet[13174]: E1101 23:27:58.889267   13174 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.395792  185407 logs.go:138] Found kubelet problem: Nov 01 23:27:59 kubernetes-upgrade-231829 kubelet[13186]: E1101 23:27:59.648961   13186 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.396392  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:00 kubernetes-upgrade-231829 kubelet[13197]: E1101 23:28:00.394164   13197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.396869  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:01 kubernetes-upgrade-231829 kubelet[13209]: E1101 23:28:01.147995   13209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.397406  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:01 kubernetes-upgrade-231829 kubelet[13220]: E1101 23:28:01.893499   13220 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.397752  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:02 kubernetes-upgrade-231829 kubelet[13232]: E1101 23:28:02.650517   13232 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.398105  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:03 kubernetes-upgrade-231829 kubelet[13242]: E1101 23:28:03.405628   13242 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.398458  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:04 kubernetes-upgrade-231829 kubelet[13252]: E1101 23:28:04.143793   13252 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.398807  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:04 kubernetes-upgrade-231829 kubelet[13262]: E1101 23:28:04.894747   13262 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.399162  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:05 kubernetes-upgrade-231829 kubelet[13273]: E1101 23:28:05.638910   13273 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.399536  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:06 kubernetes-upgrade-231829 kubelet[13283]: E1101 23:28:06.389853   13283 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.399896  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:07 kubernetes-upgrade-231829 kubelet[13295]: E1101 23:28:07.139644   13295 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W1101 23:28:08.400255  185407 logs.go:138] Found kubelet problem: Nov 01 23:28:07 kubernetes-upgrade-231829 kubelet[13305]: E1101 23:28:07.888182   13305 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:28:08.400376  185407 logs.go:123] Gathering logs for dmesg ...
	I1101 23:28:08.400390  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 23:28:08.419266  185407 logs.go:123] Gathering logs for describe nodes ...
	I1101 23:28:08.419303  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 23:28:08.481115  185407 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 23:28:08.481145  185407 logs.go:123] Gathering logs for containerd ...
	I1101 23:28:08.481159  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1101 23:28:08.548695  185407 logs.go:123] Gathering logs for container status ...
	I1101 23:28:08.548739  185407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1101 23:28:08.576173  185407 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1021-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W1101 23:26:12.252784   11479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1101 23:28:08.576217  185407 out.go:239] * 
	W1101 23:28:08.576460  185407 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1101 23:28:08.576497  185407 out.go:239] * 
	W1101 23:28:08.577815  185407 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 23:28:08.580409  185407 out.go:177] X Problems detected in kubelet:
	I1101 23:28:08.581740  185407 out.go:177]   Nov 01 23:27:18 kubernetes-upgrade-231829 kubelet[12586]: E1101 23:27:18.390338   12586 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:28:08.583235  185407 out.go:177]   Nov 01 23:27:19 kubernetes-upgrade-231829 kubelet[12597]: E1101 23:27:19.139874   12597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:28:08.584610  185407 out.go:177]   Nov 01 23:27:19 kubernetes-upgrade-231829 kubelet[12609]: E1101 23:27:19.896066   12609 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I1101 23:28:08.588577  185407 out.go:177] 
	W1101 23:28:08.590415  185407 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1101 23:28:08.590549  185407 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1101 23:28:08.590624  185407 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1101 23:28:08.593189  185407 out.go:177] 
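Every kubelet restart above fails identically: the service rejects `--cni-conf-dir`, a flag removed from the kubelet alongside dockershim (v1.24+), while this run upgrades to v1.25.3. A minimal sketch for pulling the offending flag out of such a journal line (the sample line is copied verbatim from the kubelet log above; the extraction pipeline itself is an assumption, not part of minikube):

```shell
# Isolate the unknown kubelet flag from a "command failed" journal line.
# The sample line is taken verbatim from the kubelet log in this report.
line='E1101 23:28:07.888182   13305 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"'

# grep -o keeps only the matching fragment; awk picks the flag token.
flag=$(printf '%s\n' "$line" | grep -o 'unknown flag: [^"]*' | awk '{print $3}')
echo "$flag"   # -> --cni-conf-dir
```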
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-11-01 23:19:25 UTC, end at Tue 2022-11-01 23:28:10 UTC. --
	Nov 01 23:26:11 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:11.996601374Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.014007067Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.014056391Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.034317217Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.034363727Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.052501546Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.052558520Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.069015498Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.069068725Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.085464889Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.085528733Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.101303770Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.101352296Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.116933699Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.116988525Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.133484278Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.133543323Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.148874677Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.148921877Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.166118710Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.166175024Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.181804564Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.181855762Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.199451104Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Nov 01 23:26:12 kubernetes-upgrade-231829 containerd[494]: time="2022-11-01T23:26:12.199510017Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +1.030036] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000006] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000001] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000001] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[Nov 1 23:23] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000005] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000002] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000001] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +4.095695] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000006] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000002] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000001] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +8.187325] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000008] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000001] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f77e7967cd66
	[  +0.000001] ll header: 00000000: 02 42 08 b0 75 71 02 42 c0 a8 43 02 08 00
	
	* 
	* ==> kernel <==
	*  23:28:10 up  1:10,  0 users,  load average: 1.15, 1.84, 1.85
	Linux kubernetes-upgrade-231829 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-11-01 23:19:25 UTC, end at Tue 2022-11-01 23:28:10 UTC. --
	Nov 01 23:28:07 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 01 23:28:07 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 153.
	Nov 01 23:28:07 kubernetes-upgrade-231829 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 01 23:28:07 kubernetes-upgrade-231829 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 01 23:28:07 kubernetes-upgrade-231829 kubelet[13305]: E1101 23:28:07.888182   13305 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Nov 01 23:28:07 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 01 23:28:07 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 01 23:28:08 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 154.
	Nov 01 23:28:08 kubernetes-upgrade-231829 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 01 23:28:08 kubernetes-upgrade-231829 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 01 23:28:08 kubernetes-upgrade-231829 kubelet[13455]: E1101 23:28:08.649530   13455 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Nov 01 23:28:08 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 01 23:28:08 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 01 23:28:09 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 155.
	Nov 01 23:28:09 kubernetes-upgrade-231829 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 01 23:28:09 kubernetes-upgrade-231829 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 01 23:28:09 kubernetes-upgrade-231829 kubelet[13475]: E1101 23:28:09.369498   13475 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Nov 01 23:28:09 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 01 23:28:09 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 01 23:28:10 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Nov 01 23:28:10 kubernetes-upgrade-231829 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 01 23:28:10 kubernetes-upgrade-231829 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 01 23:28:10 kubernetes-upgrade-231829 kubelet[13622]: E1101 23:28:10.167361   13622 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Nov 01 23:28:10 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Nov 01 23:28:10 kubernetes-upgrade-231829 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 23:28:10.126243  241514 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-231829 -n kubernetes-upgrade-231829
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-231829 -n kubernetes-upgrade-231829: exit status 2 (384.726664ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-231829" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-231829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-231829
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-231829: (2.115077982s)
--- FAIL: TestKubernetesUpgrade (583.50s)
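The kubelet crash loop in the log above (restart counters 153 through 156, each attempt dying with "failed to parse kubelet flag: unknown flag: --cni-conf-dir") matches kubelet's removal of the dockershim networking flags (`--network-plugin`, `--cni-conf-dir`, `--cni-bin-dir`) in Kubernetes 1.24: a systemd drop-in written for an older kubelet keeps passing a flag the upgraded binary no longer accepts. A minimal sketch of scanning a drop-in for these removed flags; the drop-in contents below are illustrative, not taken from this run:

```shell
# Sketch: detect dockershim-era kubelet flags that newer kubelets reject.
# The file contents here are a hypothetical example of a stale drop-in.
dropin=$(mktemp)
cat > "$dropin" <<'EOF'
Environment="KUBELET_KUBEADM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d"
EOF

found=""
for flag in --cni-conf-dir --cni-bin-dir --network-plugin; do
    # `--` stops grep's option parsing so patterns starting with "--" work.
    if grep -q -- "$flag" "$dropin"; then
        echo "stale flag: $flag"
        found="$found $flag"
    fi
done
```

On a real node the file to inspect would typically be `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`, but the exact path depends on how kubelet was installed.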

                                                
                                    
TestNetworkPlugins/group/calico/Start (528.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-231843 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-231843 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m48.728163234s)

                                                
                                                
-- stdout --
	* [calico-231843] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node calico-231843 in cluster calico-231843
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 23:30:21.807871  272212 out.go:296] Setting OutFile to fd 1 ...
	I1101 23:30:21.808044  272212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:30:21.808065  272212 out.go:309] Setting ErrFile to fd 2...
	I1101 23:30:21.808072  272212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:30:21.808309  272212 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	I1101 23:30:21.809199  272212 out.go:303] Setting JSON to false
	I1101 23:30:21.811269  272212 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4368,"bootTime":1667341054,"procs":873,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 23:30:21.811339  272212 start.go:126] virtualization: kvm guest
	I1101 23:30:21.814624  272212 out.go:177] * [calico-231843] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1101 23:30:21.816455  272212 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 23:30:21.816397  272212 notify.go:220] Checking for updates...
	I1101 23:30:21.819356  272212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 23:30:21.821353  272212 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 23:30:21.822989  272212 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	I1101 23:30:21.824710  272212 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 23:30:21.826883  272212 config.go:180] Loaded profile config "cilium-231843": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:30:21.827039  272212 config.go:180] Loaded profile config "default-k8s-diff-port-232727": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:30:21.827155  272212 config.go:180] Loaded profile config "kindnet-231841": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:30:21.827209  272212 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 23:30:21.861036  272212 docker.go:137] docker version: linux-20.10.21
	I1101 23:30:21.861142  272212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:30:21.981904  272212 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-01 23:30:21.887456525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:30:21.982069  272212 docker.go:254] overlay module found
	I1101 23:30:21.984998  272212 out.go:177] * Using the docker driver based on user configuration
	I1101 23:30:21.986518  272212 start.go:282] selected driver: docker
	I1101 23:30:21.986537  272212 start.go:808] validating driver "docker" against <nil>
	I1101 23:30:21.986555  272212 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 23:30:21.988246  272212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:30:22.102463  272212 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-01 23:30:22.014152094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:30:22.102639  272212 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1101 23:30:22.102860  272212 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 23:30:22.106135  272212 out.go:177] * Using Docker driver with root privileges
	I1101 23:30:22.107649  272212 cni.go:95] Creating CNI manager for "calico"
	I1101 23:30:22.107668  272212 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
	I1101 23:30:22.107678  272212 start_flags.go:317] config:
	{Name:calico-231843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-231843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:30:22.109359  272212 out.go:177] * Starting control plane node calico-231843 in cluster calico-231843
	I1101 23:30:22.110956  272212 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1101 23:30:22.112722  272212 out.go:177] * Pulling base image ...
	I1101 23:30:22.114225  272212 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1101 23:30:22.114289  272212 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1101 23:30:22.114308  272212 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 23:30:22.114309  272212 cache.go:57] Caching tarball of preloaded images
	I1101 23:30:22.114627  272212 preload.go:174] Found /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1101 23:30:22.114660  272212 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on containerd
	I1101 23:30:22.114815  272212 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/config.json ...
	I1101 23:30:22.114846  272212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/config.json: {Name:mk37aff435bab4c4ffcbfedfdb6aaa16dbf56f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:30:22.145728  272212 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1101 23:30:22.145757  272212 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1101 23:30:22.145774  272212 cache.go:208] Successfully downloaded all kic artifacts
	I1101 23:30:22.145813  272212 start.go:364] acquiring machines lock for calico-231843: {Name:mk5aa4347b5de4ce4c0d167741fb656e4429af77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 23:30:22.145955  272212 start.go:368] acquired machines lock for "calico-231843" in 116.956µs
	I1101 23:30:22.145988  272212 start.go:93] Provisioning new machine with config: &{Name:calico-231843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-231843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1101 23:30:22.146100  272212 start.go:125] createHost starting for "" (driver="docker")
	I1101 23:30:22.149631  272212 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1101 23:30:22.149896  272212 start.go:159] libmachine.API.Create for "calico-231843" (driver="docker")
	I1101 23:30:22.149945  272212 client.go:168] LocalClient.Create starting
	I1101 23:30:22.150005  272212 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem
	I1101 23:30:22.150043  272212 main.go:134] libmachine: Decoding PEM data...
	I1101 23:30:22.150065  272212 main.go:134] libmachine: Parsing certificate...
	I1101 23:30:22.150122  272212 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem
	I1101 23:30:22.150163  272212 main.go:134] libmachine: Decoding PEM data...
	I1101 23:30:22.150180  272212 main.go:134] libmachine: Parsing certificate...
	I1101 23:30:22.150511  272212 cli_runner.go:164] Run: docker network inspect calico-231843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 23:30:22.175247  272212 cli_runner.go:211] docker network inspect calico-231843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 23:30:22.175312  272212 network_create.go:272] running [docker network inspect calico-231843] to gather additional debugging logs...
	I1101 23:30:22.175330  272212 cli_runner.go:164] Run: docker network inspect calico-231843
	W1101 23:30:22.207568  272212 cli_runner.go:211] docker network inspect calico-231843 returned with exit code 1
	I1101 23:30:22.207600  272212 network_create.go:275] error running [docker network inspect calico-231843]: docker network inspect calico-231843: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-231843
	I1101 23:30:22.207612  272212 network_create.go:277] output of [docker network inspect calico-231843]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-231843
	
	** /stderr **
	I1101 23:30:22.207666  272212 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 23:30:22.250107  272212 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b9c8e174cce4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d2:38:dc:bb}}
	I1101 23:30:22.251384  272212 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-fc1228290d01 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:e4:a2:b0:46}}
	I1101 23:30:22.252593  272212 network.go:246] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-1f2ffdb93515 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:aa:2a:e3:32}}
	I1101 23:30:22.253499  272212 network.go:246] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-46d17ac800d2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:40:a6:c8:4c}}
	I1101 23:30:22.254862  272212 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc0000144f0] misses:0}
	I1101 23:30:22.254899  272212 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1101 23:30:22.254914  272212 network_create.go:115] attempt to create docker network calico-231843 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 23:30:22.254967  272212 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-231843 calico-231843
	I1101 23:30:22.349566  272212 network_create.go:99] docker network calico-231843 192.168.85.0/24 created
	I1101 23:30:22.349596  272212 kic.go:106] calculated static IP "192.168.85.2" for the "calico-231843" container
	I1101 23:30:22.349672  272212 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 23:30:22.381213  272212 cli_runner.go:164] Run: docker volume create calico-231843 --label name.minikube.sigs.k8s.io=calico-231843 --label created_by.minikube.sigs.k8s.io=true
	I1101 23:30:22.407203  272212 oci.go:103] Successfully created a docker volume calico-231843
	I1101 23:30:22.407287  272212 cli_runner.go:164] Run: docker run --rm --name calico-231843-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-231843 --entrypoint /usr/bin/test -v calico-231843:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1101 23:30:23.063743  272212 oci.go:107] Successfully prepared a docker volume calico-231843
	I1101 23:30:23.063801  272212 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1101 23:30:23.063823  272212 kic.go:179] Starting extracting preloaded images to volume ...
	I1101 23:30:23.063879  272212 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-231843:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 23:30:29.101202  272212 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-231843:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (6.037243371s)
	I1101 23:30:29.101240  272212 kic.go:188] duration metric: took 6.037413 seconds to extract preloaded images to volume
	W1101 23:30:29.101397  272212 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 23:30:29.101527  272212 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 23:30:29.213221  272212 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-231843 --name calico-231843 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-231843 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-231843 --network calico-231843 --ip 192.168.85.2 --volume calico-231843:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1101 23:30:29.670572  272212 cli_runner.go:164] Run: docker container inspect calico-231843 --format={{.State.Running}}
	I1101 23:30:29.698235  272212 cli_runner.go:164] Run: docker container inspect calico-231843 --format={{.State.Status}}
	I1101 23:30:29.724401  272212 cli_runner.go:164] Run: docker exec calico-231843 stat /var/lib/dpkg/alternatives/iptables
	I1101 23:30:29.783500  272212 oci.go:144] the created container "calico-231843" has a running status.
	I1101 23:30:29.783536  272212 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15232-6112/.minikube/machines/calico-231843/id_rsa...
	I1101 23:30:29.916706  272212 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15232-6112/.minikube/machines/calico-231843/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 23:30:30.003059  272212 cli_runner.go:164] Run: docker container inspect calico-231843 --format={{.State.Status}}
	I1101 23:30:30.039325  272212 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 23:30:30.039351  272212 kic_runner.go:114] Args: [docker exec --privileged calico-231843 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 23:30:30.129864  272212 cli_runner.go:164] Run: docker container inspect calico-231843 --format={{.State.Status}}
	I1101 23:30:30.165494  272212 machine.go:88] provisioning docker machine ...
	I1101 23:30:30.165533  272212 ubuntu.go:169] provisioning hostname "calico-231843"
	I1101 23:30:30.165594  272212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231843
	I1101 23:30:30.196919  272212 main.go:134] libmachine: Using SSH client type: native
	I1101 23:30:30.197121  272212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49433 <nil> <nil>}
	I1101 23:30:30.197143  272212 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-231843 && echo "calico-231843" | sudo tee /etc/hostname
	I1101 23:30:30.334513  272212 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-231843
	
	I1101 23:30:30.334631  272212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231843
	I1101 23:30:30.368619  272212 main.go:134] libmachine: Using SSH client type: native
	I1101 23:30:30.368800  272212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49433 <nil> <nil>}
	I1101 23:30:30.368830  272212 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-231843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-231843/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-231843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 23:30:30.487311  272212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1101 23:30:30.487348  272212 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-6112/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-6112/.minikube}
	I1101 23:30:30.487368  272212 ubuntu.go:177] setting up certificates
	I1101 23:30:30.487377  272212 provision.go:83] configureAuth start
	I1101 23:30:30.487473  272212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-231843
	I1101 23:30:30.511436  272212 provision.go:138] copyHostCerts
	I1101 23:30:30.511490  272212 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem, removing ...
	I1101 23:30:30.511497  272212 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem
	I1101 23:30:30.511555  272212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem (1078 bytes)
	I1101 23:30:30.511633  272212 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem, removing ...
	I1101 23:30:30.511642  272212 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem
	I1101 23:30:30.511668  272212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem (1123 bytes)
	I1101 23:30:30.511723  272212 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem, removing ...
	I1101 23:30:30.511732  272212 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem
	I1101 23:30:30.511755  272212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem (1675 bytes)
	I1101 23:30:30.511798  272212 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem org=jenkins.calico-231843 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube calico-231843]
	I1101 23:30:30.613379  272212 provision.go:172] copyRemoteCerts
	I1101 23:30:30.613432  272212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 23:30:30.613469  272212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231843
	I1101 23:30:30.644340  272212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/calico-231843/id_rsa Username:docker}
	I1101 23:30:30.738670  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 23:30:30.755856  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 23:30:30.779088  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1101 23:30:30.796429  272212 provision.go:86] duration metric: configureAuth took 309.035719ms
	I1101 23:30:30.796459  272212 ubuntu.go:193] setting minikube options for container-runtime
	I1101 23:30:30.796620  272212 config.go:180] Loaded profile config "calico-231843": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:30:30.796633  272212 machine.go:91] provisioned docker machine in 631.115666ms
	I1101 23:30:30.796638  272212 client.go:171] LocalClient.Create took 8.646685942s
	I1101 23:30:30.796656  272212 start.go:167] duration metric: libmachine.API.Create for "calico-231843" took 8.646761648s
	I1101 23:30:30.796663  272212 start.go:300] post-start starting for "calico-231843" (driver="docker")
	I1101 23:30:30.796668  272212 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 23:30:30.796703  272212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 23:30:30.796744  272212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231843
	I1101 23:30:30.825538  272212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/calico-231843/id_rsa Username:docker}
	I1101 23:30:30.913986  272212 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 23:30:30.916954  272212 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 23:30:30.916984  272212 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 23:30:30.916995  272212 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 23:30:30.917000  272212 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1101 23:30:30.917008  272212 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/addons for local assets ...
	I1101 23:30:30.917056  272212 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/files for local assets ...
	I1101 23:30:30.917117  272212 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem -> 128402.pem in /etc/ssl/certs
	I1101 23:30:30.917187  272212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 23:30:30.923758  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /etc/ssl/certs/128402.pem (1708 bytes)
	I1101 23:30:30.940762  272212 start.go:303] post-start completed in 144.089262ms
	I1101 23:30:30.941086  272212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-231843
	I1101 23:30:30.965681  272212 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/config.json ...
	I1101 23:30:30.965929  272212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 23:30:30.965978  272212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231843
	I1101 23:30:30.990979  272212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/calico-231843/id_rsa Username:docker}
	I1101 23:30:31.072284  272212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 23:30:31.076330  272212 start.go:128] duration metric: createHost completed in 8.930217297s
	I1101 23:30:31.076357  272212 start.go:83] releasing machines lock for "calico-231843", held for 8.930384982s
	I1101 23:30:31.076455  272212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-231843
	I1101 23:30:31.102996  272212 ssh_runner.go:195] Run: systemctl --version
	I1101 23:30:31.103041  272212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 23:30:31.103050  272212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231843
	I1101 23:30:31.103085  272212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231843
	I1101 23:30:31.136960  272212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/calico-231843/id_rsa Username:docker}
	I1101 23:30:31.143836  272212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/calico-231843/id_rsa Username:docker}
	I1101 23:30:31.219376  272212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1101 23:30:31.252512  272212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1101 23:30:31.261759  272212 docker.go:189] disabling docker service ...
	I1101 23:30:31.261849  272212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 23:30:31.278757  272212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 23:30:31.288399  272212 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 23:30:31.376478  272212 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 23:30:31.456834  272212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 23:30:31.465964  272212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 23:30:31.478728  272212 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I1101 23:30:31.486370  272212 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1101 23:30:31.495777  272212 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1101 23:30:31.503335  272212 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I1101 23:30:31.510868  272212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 23:30:31.517566  272212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 23:30:31.524009  272212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 23:30:31.612399  272212 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1101 23:30:31.678568  272212 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1101 23:30:31.678629  272212 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1101 23:30:31.682311  272212 start.go:472] Will wait 60s for crictl version
	I1101 23:30:31.682370  272212 ssh_runner.go:195] Run: sudo crictl version
	I1101 23:30:31.712656  272212 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-11-01T23:30:31Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1101 23:30:42.759507  272212 ssh_runner.go:195] Run: sudo crictl version
	I1101 23:30:42.783878  272212 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.9
	RuntimeApiVersion:  v1alpha2
	I1101 23:30:42.783930  272212 ssh_runner.go:195] Run: containerd --version
	I1101 23:30:42.807365  272212 ssh_runner.go:195] Run: containerd --version
	I1101 23:30:42.831961  272212 out.go:177] * Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
	I1101 23:30:42.833669  272212 cli_runner.go:164] Run: docker network inspect calico-231843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 23:30:42.856610  272212 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 23:30:42.859837  272212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 23:30:42.869204  272212 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1101 23:30:42.869255  272212 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 23:30:42.893371  272212 containerd.go:553] all images are preloaded for containerd runtime.
	I1101 23:30:42.893392  272212 containerd.go:467] Images already preloaded, skipping extraction
	I1101 23:30:42.893430  272212 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 23:30:42.916193  272212 containerd.go:553] all images are preloaded for containerd runtime.
	I1101 23:30:42.916213  272212 cache_images.go:84] Images are preloaded, skipping loading
	I1101 23:30:42.916248  272212 ssh_runner.go:195] Run: sudo crictl info
	I1101 23:30:42.938970  272212 cni.go:95] Creating CNI manager for "calico"
	I1101 23:30:42.938996  272212 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 23:30:42.939011  272212 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-231843 NodeName:calico-231843 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1101 23:30:42.939141  272212 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-231843"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 23:30:42.939237  272212 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-231843 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:calico-231843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I1101 23:30:42.939278  272212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1101 23:30:42.946399  272212 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 23:30:42.946470  272212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 23:30:42.953175  272212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (506 bytes)
	I1101 23:30:42.965615  272212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 23:30:42.978436  272212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2042 bytes)
	I1101 23:30:42.990609  272212 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 23:30:42.993368  272212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 23:30:43.002130  272212 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843 for IP: 192.168.85.2
	I1101 23:30:43.002235  272212 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key
	I1101 23:30:43.002281  272212 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key
	I1101 23:30:43.002322  272212 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/client.key
	I1101 23:30:43.002339  272212 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/client.crt with IP's: []
	I1101 23:30:43.207921  272212 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/client.crt ...
	I1101 23:30:43.207948  272212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/client.crt: {Name:mk71b1909a28a0b13004f1033754a0c77ec999e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:30:43.208195  272212 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/client.key ...
	I1101 23:30:43.208218  272212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/client.key: {Name:mkb529f7592a5b26e4800db9ebd59c8b9ed82c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:30:43.208349  272212 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.key.43b9df8c
	I1101 23:30:43.208367  272212 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 23:30:43.441890  272212 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.crt.43b9df8c ...
	I1101 23:30:43.441922  272212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.crt.43b9df8c: {Name:mk3f7a3a8402d0d7e9e8be96f04a50085e470b3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:30:43.442127  272212 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.key.43b9df8c ...
	I1101 23:30:43.442147  272212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.key.43b9df8c: {Name:mk2f2de26a18b19f229df523fd98852acebe901a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:30:43.442258  272212 certs.go:320] copying /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.crt
	I1101 23:30:43.442316  272212 certs.go:324] copying /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.key
	I1101 23:30:43.442359  272212 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/proxy-client.key
	I1101 23:30:43.442373  272212 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/proxy-client.crt with IP's: []
	I1101 23:30:43.798362  272212 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/proxy-client.crt ...
	I1101 23:30:43.798392  272212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/proxy-client.crt: {Name:mk30409e96eade643c6b1b4918c42c5ec4dc9091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:30:43.798589  272212 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/proxy-client.key ...
	I1101 23:30:43.798602  272212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/proxy-client.key: {Name:mkbc0d4f34a9fab89be8e6e537b52b4abb0dbe5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:30:43.798801  272212 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem (1338 bytes)
	W1101 23:30:43.798838  272212 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840_empty.pem, impossibly tiny 0 bytes
	I1101 23:30:43.798850  272212 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 23:30:43.798869  272212 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem (1078 bytes)
	I1101 23:30:43.798893  272212 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem (1123 bytes)
	I1101 23:30:43.798920  272212 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem (1675 bytes)
	I1101 23:30:43.798956  272212 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem (1708 bytes)
	I1101 23:30:43.799486  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 23:30:43.818226  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 23:30:43.835051  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 23:30:43.852310  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/calico-231843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 23:30:43.869081  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 23:30:43.885567  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 23:30:43.901843  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 23:30:43.918257  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 23:30:43.934457  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem --> /usr/share/ca-certificates/12840.pem (1338 bytes)
	I1101 23:30:43.950852  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /usr/share/ca-certificates/128402.pem (1708 bytes)
	I1101 23:30:43.967455  272212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 23:30:43.983968  272212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 23:30:43.996101  272212 ssh_runner.go:195] Run: openssl version
	I1101 23:30:44.000600  272212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128402.pem && ln -fs /usr/share/ca-certificates/128402.pem /etc/ssl/certs/128402.pem"
	I1101 23:30:44.007744  272212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128402.pem
	I1101 23:30:44.010521  272212 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  1 22:50 /usr/share/ca-certificates/128402.pem
	I1101 23:30:44.010561  272212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128402.pem
	I1101 23:30:44.015136  272212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128402.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 23:30:44.021938  272212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 23:30:44.029001  272212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:30:44.031929  272212 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  1 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:30:44.031962  272212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 23:30:44.036482  272212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 23:30:44.043498  272212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12840.pem && ln -fs /usr/share/ca-certificates/12840.pem /etc/ssl/certs/12840.pem"
	I1101 23:30:44.050515  272212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12840.pem
	I1101 23:30:44.053359  272212 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  1 22:50 /usr/share/ca-certificates/12840.pem
	I1101 23:30:44.053400  272212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12840.pem
	I1101 23:30:44.058162  272212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12840.pem /etc/ssl/certs/51391683.0"
	I1101 23:30:44.064910  272212 kubeadm.go:396] StartCluster: {Name:calico-231843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-231843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 23:30:44.064990  272212 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1101 23:30:44.065021  272212 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 23:30:44.089545  272212 cri.go:87] found id: ""
	I1101 23:30:44.089612  272212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 23:30:44.096731  272212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 23:30:44.103310  272212 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1101 23:30:44.103359  272212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 23:30:44.109777  272212 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 23:30:44.109834  272212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 23:30:44.153013  272212 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1101 23:30:44.153117  272212 kubeadm.go:317] [preflight] Running pre-flight checks
	I1101 23:30:44.183657  272212 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1101 23:30:44.183785  272212 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1101 23:30:44.183849  272212 kubeadm.go:317] OS: Linux
	I1101 23:30:44.183929  272212 kubeadm.go:317] CGROUPS_CPU: enabled
	I1101 23:30:44.184008  272212 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1101 23:30:44.184067  272212 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1101 23:30:44.184131  272212 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1101 23:30:44.184197  272212 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1101 23:30:44.184269  272212 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1101 23:30:44.184312  272212 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1101 23:30:44.184397  272212 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1101 23:30:44.184479  272212 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1101 23:30:44.263514  272212 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 23:30:44.263641  272212 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 23:30:44.263776  272212 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 23:30:44.379192  272212 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 23:30:44.382333  272212 out.go:204]   - Generating certificates and keys ...
	I1101 23:30:44.382495  272212 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1101 23:30:44.382590  272212 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1101 23:30:44.533893  272212 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 23:30:44.771005  272212 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1101 23:30:44.931568  272212 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1101 23:30:45.351679  272212 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1101 23:30:45.597890  272212 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1101 23:30:45.598129  272212 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-231843 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 23:30:45.743328  272212 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1101 23:30:45.743552  272212 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-231843 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 23:30:46.002713  272212 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 23:30:46.096305  272212 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 23:30:46.269783  272212 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1101 23:30:46.269918  272212 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 23:30:46.405197  272212 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 23:30:46.448782  272212 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 23:30:46.666121  272212 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 23:30:46.727121  272212 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 23:30:46.740178  272212 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 23:30:46.741253  272212 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 23:30:46.741336  272212 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1101 23:30:46.833523  272212 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 23:30:46.835357  272212 out.go:204]   - Booting up control plane ...
	I1101 23:30:46.835528  272212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 23:30:46.837481  272212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 23:30:46.838553  272212 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 23:30:46.839538  272212 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 23:30:46.841797  272212 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 23:30:54.953722  272212 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.111795 seconds
	I1101 23:30:54.953896  272212 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 23:30:55.134148  272212 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 23:30:55.914397  272212 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 23:30:55.914619  272212 kubeadm.go:317] [mark-control-plane] Marking the node calico-231843 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 23:30:56.602746  272212 kubeadm.go:317] [bootstrap-token] Using token: tdtx1a.from8rp3i9d8jt10
	I1101 23:30:56.653727  272212 out.go:204]   - Configuring RBAC rules ...
	I1101 23:30:56.653912  272212 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 23:30:56.659462  272212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 23:30:56.772148  272212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 23:30:56.775919  272212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 23:30:56.832777  272212 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 23:30:56.845745  272212 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 23:30:56.867569  272212 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 23:30:57.496770  272212 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1101 23:30:57.781673  272212 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1101 23:30:57.782633  272212 kubeadm.go:317] 
	I1101 23:30:57.782707  272212 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1101 23:30:57.782714  272212 kubeadm.go:317] 
	I1101 23:30:57.782774  272212 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1101 23:30:57.782778  272212 kubeadm.go:317] 
	I1101 23:30:57.782799  272212 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1101 23:30:57.782844  272212 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 23:30:57.782884  272212 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 23:30:57.782887  272212 kubeadm.go:317] 
	I1101 23:30:57.782929  272212 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1101 23:30:57.782932  272212 kubeadm.go:317] 
	I1101 23:30:57.782970  272212 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 23:30:57.782973  272212 kubeadm.go:317] 
	I1101 23:30:57.783015  272212 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1101 23:30:57.783074  272212 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 23:30:57.783127  272212 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 23:30:57.783132  272212 kubeadm.go:317] 
	I1101 23:30:57.783216  272212 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 23:30:57.783284  272212 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1101 23:30:57.783290  272212 kubeadm.go:317] 
	I1101 23:30:57.783355  272212 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token tdtx1a.from8rp3i9d8jt10 \
	I1101 23:30:57.783519  272212 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:035b63f088f323cab437251192a32166cf4377fef2aef8dc417cb1e55982412e \
	I1101 23:30:57.783561  272212 kubeadm.go:317] 	--control-plane 
	I1101 23:30:57.783567  272212 kubeadm.go:317] 
	I1101 23:30:57.783657  272212 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1101 23:30:57.783662  272212 kubeadm.go:317] 
	I1101 23:30:57.783750  272212 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token tdtx1a.from8rp3i9d8jt10 \
	I1101 23:30:57.783859  272212 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:035b63f088f323cab437251192a32166cf4377fef2aef8dc417cb1e55982412e 
	I1101 23:30:57.787558  272212 kubeadm.go:317] W1101 23:30:44.144450     750 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I1101 23:30:57.787774  272212 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1101 23:30:57.787932  272212 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 23:30:57.787975  272212 cni.go:95] Creating CNI manager for "calico"
	I1101 23:30:57.791076  272212 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1101 23:30:57.794464  272212 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1101 23:30:57.794486  272212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I1101 23:30:57.813807  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 23:30:59.227061  272212 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.413195841s)
	I1101 23:30:59.227109  272212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 23:30:59.227199  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:30:59.227198  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=65bfd3dc2bf9824cf305579b01895f56b2ba9210 minikube.k8s.io/name=calico-231843 minikube.k8s.io/updated_at=2022_11_01T23_30_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:30:59.335804  272212 ops.go:34] apiserver oom_adj: -16
	I1101 23:30:59.335838  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:30:59.929352  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:00.429103  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:00.928905  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:01.428785  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:01.929355  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:02.428841  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:02.928909  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:03.429366  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:03.928928  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:04.428766  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:04.929365  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:05.429201  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:05.928962  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:06.429041  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:06.929782  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:07.429002  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:07.929751  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:08.429325  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:08.929333  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:09.428966  272212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 23:31:09.533161  272212 kubeadm.go:1067] duration metric: took 10.306022054s to wait for elevateKubeSystemPrivileges.
	I1101 23:31:09.533196  272212 kubeadm.go:398] StartCluster complete in 25.468291359s
	I1101 23:31:09.533219  272212 settings.go:142] acquiring lock: {Name:mk15316af474a840de6d06c1a5891b6bc5e64510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:31:09.533356  272212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 23:31:09.535046  272212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/kubeconfig: {Name:mk05c0f2e138ac359064389ca5eb4fadba1c406f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 23:31:10.115014  272212 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-231843" rescaled to 1
	I1101 23:31:10.115097  272212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 23:31:10.115118  272212 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I1101 23:31:10.115162  272212 addons.go:65] Setting storage-provisioner=true in profile "calico-231843"
	I1101 23:31:10.115180  272212 addons.go:153] Setting addon storage-provisioner=true in "calico-231843"
	W1101 23:31:10.115187  272212 addons.go:162] addon storage-provisioner should already be in state true
	I1101 23:31:10.115089  272212 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1101 23:31:10.117625  272212 out.go:177] * Verifying Kubernetes components...
	I1101 23:31:10.115232  272212 host.go:66] Checking if "calico-231843" exists ...
	I1101 23:31:10.115374  272212 addons.go:65] Setting default-storageclass=true in profile "calico-231843"
	I1101 23:31:10.115487  272212 config.go:180] Loaded profile config "calico-231843": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:31:10.119221  272212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:31:10.119244  272212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-231843"
	I1101 23:31:10.119596  272212 cli_runner.go:164] Run: docker container inspect calico-231843 --format={{.State.Status}}
	I1101 23:31:10.119774  272212 cli_runner.go:164] Run: docker container inspect calico-231843 --format={{.State.Status}}
	I1101 23:31:10.167149  272212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 23:31:10.169344  272212 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 23:31:10.169375  272212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 23:31:10.169430  272212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231843
	I1101 23:31:10.179093  272212 addons.go:153] Setting addon default-storageclass=true in "calico-231843"
	W1101 23:31:10.179120  272212 addons.go:162] addon default-storageclass should already be in state true
	I1101 23:31:10.179148  272212 host.go:66] Checking if "calico-231843" exists ...
	I1101 23:31:10.179633  272212 cli_runner.go:164] Run: docker container inspect calico-231843 --format={{.State.Status}}
	I1101 23:31:10.203624  272212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/calico-231843/id_rsa Username:docker}
	I1101 23:31:10.206001  272212 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 23:31:10.206025  272212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 23:31:10.206069  272212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231843
	I1101 23:31:10.245696  272212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/calico-231843/id_rsa Username:docker}
	I1101 23:31:10.337343  272212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 23:31:10.338020  272212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 23:31:10.338467  272212 node_ready.go:35] waiting up to 5m0s for node "calico-231843" to be "Ready" ...
	I1101 23:31:10.341547  272212 node_ready.go:49] node "calico-231843" has status "Ready":"True"
	I1101 23:31:10.341570  272212 node_ready.go:38] duration metric: took 3.073129ms waiting for node "calico-231843" to be "Ready" ...
	I1101 23:31:10.341581  272212 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 23:31:10.356639  272212 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace to be "Ready" ...
	I1101 23:31:10.439765  272212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 23:31:12.061658  272212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.723592291s)
	I1101 23:31:12.061747  272212 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.724366989s)
	I1101 23:31:12.061767  272212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.621968407s)
	I1101 23:31:12.061776  272212 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I1101 23:31:12.063818  272212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1101 23:31:12.065424  272212 addons.go:414] enableAddons completed in 1.95030592s
	I1101 23:31:12.426100  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:14.427558  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:16.922861  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:18.922907  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:21.422525  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:23.927476  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:26.423302  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:28.925469  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:31.422711  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:33.922294  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:35.923739  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:38.422337  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:40.423876  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:42.922808  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:45.423231  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:47.922714  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:49.924492  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:52.422773  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:54.422906  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:56.923422  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:31:59.422837  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:01.423228  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:03.922674  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:05.923172  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:07.923361  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:09.924579  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:12.423495  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:14.423961  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:16.922786  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:18.923123  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:21.422614  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:23.424290  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:25.922929  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:27.923298  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:29.927023  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:32.422912  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:34.422978  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:36.922807  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:39.422726  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:41.423149  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:43.424094  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:45.425254  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:47.923299  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:49.924490  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:52.422655  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:54.922590  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:56.922954  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:32:59.422897  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:01.922946  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:04.423100  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:06.923235  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:09.422744  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:11.422944  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:13.922547  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:15.922996  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:17.923165  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:19.924226  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:22.422259  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:24.422936  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:26.423601  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:28.923371  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:30.923483  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:33.423320  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:35.423971  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:37.424177  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:39.922963  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:42.423011  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:44.923302  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:46.923378  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:49.422729  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:51.424032  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:53.922595  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:55.923953  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:33:58.422641  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:00.423227  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:02.923747  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:05.422452  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:07.422523  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:09.923560  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:12.423166  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:14.922203  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:17.422380  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:19.423724  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:21.922351  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:23.923102  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:26.422538  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:28.922639  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:31.423011  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:33.922820  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:35.925158  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:38.422430  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:40.423115  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:42.423266  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:44.922832  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:47.422818  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:49.422888  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:51.922823  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:53.922885  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:56.423281  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:34:58.922118  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:00.923030  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:03.423267  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:05.922788  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:08.422699  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:10.423601  272212 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:10.428214  272212 pod_ready.go:81] duration metric: took 4m0.071544701s waiting for pod "calico-kube-controllers-7df895d496-ntx82" in "kube-system" namespace to be "Ready" ...
	E1101 23:35:10.428239  272212 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1101 23:35:10.428248  272212 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-m8tqw" in "kube-system" namespace to be "Ready" ...
	I1101 23:35:12.439770  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:14.939834  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:17.439562  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:19.940441  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:22.439538  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:24.940038  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:27.439741  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:29.440105  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:31.939024  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:33.939225  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:35.939541  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:37.940017  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:39.940682  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:42.439295  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:44.940489  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:47.439578  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:49.439954  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:51.939357  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:53.939864  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:55.940294  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:35:58.439211  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:00.440544  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:02.939806  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:05.440014  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:07.940040  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:09.940989  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:12.439030  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:14.941390  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:17.438883  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:19.438944  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:21.439877  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:23.939595  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:26.439762  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:28.939186  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:30.940037  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:33.439231  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:35.939827  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:38.438505  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:40.438591  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:42.438838  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:44.438989  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:46.439501  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:48.939860  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:51.441941  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:53.941184  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:56.438499  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:36:58.439792  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:00.939336  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:02.940117  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:04.940708  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:07.439966  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:09.440611  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:11.940454  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:13.942296  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:16.439209  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:18.439568  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:20.439765  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:22.940041  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:24.940640  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:27.439545  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:29.941271  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:32.439305  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:34.940302  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:37.439178  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:39.940199  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:42.439135  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:44.941129  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:47.439195  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:49.940121  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:52.439604  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:54.939120  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:56.939316  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:37:58.939813  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:01.438998  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:03.439590  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:05.939527  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:07.939830  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:09.940943  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:12.439453  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:14.939487  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:16.939733  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:18.940072  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:21.439119  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:23.940092  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:26.439007  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:28.440229  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:30.938961  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:32.939154  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:34.939763  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:37.439743  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:39.440588  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:41.939582  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:44.439432  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:46.938667  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:48.939338  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:51.438712  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:53.439954  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:55.938908  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:38:58.439567  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:39:00.939552  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:39:03.439743  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:39:05.938683  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:39:08.439386  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:39:10.439955  272212 pod_ready.go:102] pod "calico-node-m8tqw" in "kube-system" namespace has status "Ready":"False"
	I1101 23:39:10.444783  272212 pod_ready.go:81] duration metric: took 4m0.016521299s waiting for pod "calico-node-m8tqw" in "kube-system" namespace to be "Ready" ...
	E1101 23:39:10.444806  272212 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1101 23:39:10.444823  272212 pod_ready.go:38] duration metric: took 8m0.103228684s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 23:39:10.447572  272212 out.go:177] 
	W1101 23:39:10.449387  272212 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W1101 23:39:10.449411  272212 out.go:239] * 
	W1101 23:39:10.450279  272212 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 23:39:10.451517  272212 out.go:177] 
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (528.75s)
TestNetworkPlugins/group/bridge/DNS (364.93s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:32:59.224471   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135692781s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:33:23.249587   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12437759s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126983961s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E1101 23:33:45.625747   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134552469s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.1218171s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:34:42.407216   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124374322s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:34:45.170331   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123662376s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:35:09.928655   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:09.933919   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:09.944186   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:09.964487   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:10.004717   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:10.084979   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:10.245323   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:10.566065   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:11.206551   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:12.487263   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:15.048240   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:35:20.168926   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133008305s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:35:30.409775   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:32.157292   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:35:32.162718   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:35:32.172960   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:35:32.193781   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:35:32.234051   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:35:32.314694   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:35:32.475348   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:35:32.795528   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:35:33.436426   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:35:34.717180   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:35:37.277682   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:36:01.783855   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123734759s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:36:13.119056   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:36:31.851623   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13213562s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:36:49.027446   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
E1101 23:36:49.032820   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
E1101 23:36:49.043048   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
E1101 23:36:49.063291   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
E1101 23:36:49.103567   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
E1101 23:36:49.183881   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
E1101 23:36:49.344358   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
E1101 23:36:49.664912   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
E1101 23:36:50.305912   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
E1101 23:36:51.586325   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.119276173s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:37:59.224606   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.119694134s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (364.93s)

TestNetworkPlugins/group/enable-default-cni/DNS (302.76s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:35:50.890549   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:35:52.638822   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12154772s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125124578s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:36:29.466235   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130664954s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.104685571s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:36:54.079584   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
E1101 23:36:54.146759   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:36:59.267324   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
E1101 23:37:01.326994   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:37:09.507914   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124691834s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:37:29.011465   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:37:29.988556   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13308251s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:37:32.185851   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125491137s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:37:53.772580   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:38:10.949465   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140980821s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.113551286s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130853776s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:39:32.870591   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/cilium-231843/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:39:42.407325   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126967939s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1101 23:40:09.928615   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
E1101 23:40:32.157403   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default
E1101 23:40:37.612764   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/auto-231841/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-231841 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126799151s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (302.76s)


Test pass (249/277)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 26.53
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.25.3/json-events 9.66
11 TestDownloadOnly/v1.25.3/preload-exists 0
15 TestDownloadOnly/v1.25.3/LogsDuration 1.27
16 TestDownloadOnly/DeleteAll 0.54
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
18 TestDownloadOnlyKic 3.56
19 TestBinaryMirror 0.83
20 TestOffline 69.72
22 TestAddons/Setup 145.49
24 TestAddons/parallel/Registry 18.25
25 TestAddons/parallel/Ingress 29.15
26 TestAddons/parallel/MetricsServer 6.31
27 TestAddons/parallel/HelmTiller 14.49
29 TestAddons/parallel/CSI 43.24
30 TestAddons/parallel/Headlamp 9.29
31 TestAddons/parallel/CloudSpanner 5.33
33 TestAddons/serial/GCPAuth 41.51
34 TestAddons/StoppedEnableDisable 20.19
35 TestCertOptions 34.45
36 TestCertExpiration 222.12
38 TestForceSystemdFlag 44.22
39 TestForceSystemdEnv 37.8
40 TestKVMDriverInstallOrUpdate 8.58
44 TestErrorSpam/setup 22.26
45 TestErrorSpam/start 0.92
46 TestErrorSpam/status 1.06
47 TestErrorSpam/pause 1.55
48 TestErrorSpam/unpause 1.53
49 TestErrorSpam/stop 1.48
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 55.47
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 15.58
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.09
60 TestFunctional/serial/CacheCmd/cache/add_remote 4.19
61 TestFunctional/serial/CacheCmd/cache/add_local 2.24
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
63 TestFunctional/serial/CacheCmd/cache/list 0.07
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
66 TestFunctional/serial/CacheCmd/cache/delete 0.14
67 TestFunctional/serial/MinikubeKubectlCmd 0.13
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
69 TestFunctional/serial/ExtraConfig 38.84
70 TestFunctional/serial/ComponentHealth 0.06
71 TestFunctional/serial/LogsCmd 1.11
72 TestFunctional/serial/LogsFileCmd 1.12
74 TestFunctional/parallel/ConfigCmd 0.56
75 TestFunctional/parallel/DashboardCmd 13.51
76 TestFunctional/parallel/DryRun 0.61
77 TestFunctional/parallel/InternationalLanguage 0.28
78 TestFunctional/parallel/StatusCmd 1.17
81 TestFunctional/parallel/ServiceCmd 12.1
82 TestFunctional/parallel/ServiceCmdConnect 8.73
83 TestFunctional/parallel/AddonsCmd 0.22
84 TestFunctional/parallel/PersistentVolumeClaim 33.71
86 TestFunctional/parallel/SSHCmd 0.66
87 TestFunctional/parallel/CpCmd 1.5
88 TestFunctional/parallel/MySQL 24.26
89 TestFunctional/parallel/FileSync 0.43
90 TestFunctional/parallel/CertSync 2.42
94 TestFunctional/parallel/NodeLabels 0.06
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
98 TestFunctional/parallel/License 0.3
99 TestFunctional/parallel/Version/short 0.09
100 TestFunctional/parallel/Version/components 1.06
101 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
102 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
103 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
104 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
105 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
106 TestFunctional/parallel/ImageCommands/Setup 1.48
107 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
108 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
109 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
110 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.94
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.21
115 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.67
116 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.63
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
118 TestFunctional/parallel/ProfileCmd/profile_list 0.51
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/MountCmd/any-port 19.18
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.91
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.23
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
131 TestFunctional/parallel/MountCmd/specific-port 2.49
132 TestFunctional/delete_addon-resizer_images 0.08
133 TestFunctional/delete_my-image_image 0.02
134 TestFunctional/delete_minikube_cached_images 0.02
137 TestIngressAddonLegacy/StartLegacyK8sCluster 72.53
139 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.18
140 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.37
141 TestIngressAddonLegacy/serial/ValidateIngressAddons 42.38
144 TestJSONOutput/start/Command 46.42
145 TestJSONOutput/start/Audit 0
147 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/pause/Command 0.67
151 TestJSONOutput/pause/Audit 0
153 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/unpause/Command 0.6
157 TestJSONOutput/unpause/Audit 0
159 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/stop/Command 5.82
163 TestJSONOutput/stop/Audit 0
165 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
167 TestErrorJSONOutput 0.27
169 TestKicCustomNetwork/create_custom_network 33.83
170 TestKicCustomNetwork/use_default_bridge_network 27.48
171 TestKicExistingNetwork 30.67
172 TestKicCustomSubnet 29
173 TestMainNoArgs 0.07
174 TestMinikubeProfile 63.23
177 TestMountStart/serial/StartWithMountFirst 4.75
178 TestMountStart/serial/VerifyMountFirst 0.31
179 TestMountStart/serial/StartWithMountSecond 4.97
180 TestMountStart/serial/VerifyMountSecond 0.31
181 TestMountStart/serial/DeleteFirst 1.7
182 TestMountStart/serial/VerifyMountPostDelete 0.32
183 TestMountStart/serial/Stop 1.24
184 TestMountStart/serial/RestartStopped 6.57
185 TestMountStart/serial/VerifyMountPostStop 0.32
188 TestMultiNode/serial/FreshStart2Nodes 88.96
189 TestMultiNode/serial/DeployApp2Nodes 4.49
190 TestMultiNode/serial/PingHostFrom2Pods 0.87
191 TestMultiNode/serial/AddNode 31.27
192 TestMultiNode/serial/ProfileList 0.34
193 TestMultiNode/serial/CopyFile 11.35
194 TestMultiNode/serial/StopNode 2.34
195 TestMultiNode/serial/StartAfterStop 30.73
196 TestMultiNode/serial/RestartKeepsNodes 155.19
197 TestMultiNode/serial/DeleteNode 4.91
198 TestMultiNode/serial/StopMultiNode 40.05
199 TestMultiNode/serial/RestartMultiNode 96.32
200 TestMultiNode/serial/ValidateNameConflict 26.12
207 TestScheduledStopUnix 99.44
210 TestInsufficientStorage 15.37
211 TestRunningBinaryUpgrade 148.04
214 TestMissingContainerUpgrade 138.72
215 TestStoppedBinaryUpgrade/Setup 1.2
217 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
218 TestNoKubernetes/serial/StartWithK8s 33.84
219 TestStoppedBinaryUpgrade/Upgrade 154.79
220 TestNoKubernetes/serial/StartWithStopK8s 23.68
221 TestNoKubernetes/serial/Start 5.75
222 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
223 TestNoKubernetes/serial/ProfileList 1.78
224 TestNoKubernetes/serial/Stop 3.3
225 TestNoKubernetes/serial/StartNoArgs 6.79
234 TestPause/serial/Start 61.65
235 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
236 TestPause/serial/SecondStartNoReconfiguration 16.48
237 TestPause/serial/Pause 0.92
238 TestPause/serial/VerifyStatus 0.58
239 TestPause/serial/Unpause 0.97
240 TestPause/serial/PauseAgain 1.43
241 TestPause/serial/DeletePaused 3.81
242 TestPause/serial/VerifyDeletedResources 0.79
243 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
251 TestNetworkPlugins/group/false 1.15
256 TestStartStop/group/old-k8s-version/serial/FirstStart 121.26
258 TestStartStop/group/no-preload/serial/FirstStart 49.13
259 TestStartStop/group/no-preload/serial/DeployApp 9.34
260 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.61
261 TestStartStop/group/no-preload/serial/Stop 20.01
262 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
263 TestStartStop/group/no-preload/serial/SecondStart 334.48
264 TestStartStop/group/old-k8s-version/serial/DeployApp 8.42
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.57
266 TestStartStop/group/old-k8s-version/serial/Stop 20.06
267 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
268 TestStartStop/group/old-k8s-version/serial/SecondStart 431.94
270 TestStartStop/group/embed-certs/serial/FirstStart 43.57
271 TestStartStop/group/embed-certs/serial/DeployApp 9.31
272 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.72
273 TestStartStop/group/embed-certs/serial/Stop 20.03
274 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
275 TestStartStop/group/embed-certs/serial/SecondStart 314.86
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
279 TestStartStop/group/no-preload/serial/Pause 2.98
281 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.19
283 TestStartStop/group/newest-cni/serial/FirstStart 38.25
284 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.41
285 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.74
286 TestStartStop/group/default-k8s-diff-port/serial/Stop 20.06
287 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
288 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 570.83
289 TestStartStop/group/newest-cni/serial/DeployApp 0
290 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.64
291 TestStartStop/group/newest-cni/serial/Stop 1.26
292 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
293 TestStartStop/group/newest-cni/serial/SecondStart 29.98
294 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
295 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
296 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
297 TestStartStop/group/embed-certs/serial/Pause 3.38
298 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
299 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
300 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.48
301 TestStartStop/group/newest-cni/serial/Pause 3.15
302 TestNetworkPlugins/group/auto/Start 43.81
303 TestNetworkPlugins/group/kindnet/Start 60.51
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
307 TestStartStop/group/old-k8s-version/serial/Pause 3.12
308 TestNetworkPlugins/group/cilium/Start 110.25
309 TestNetworkPlugins/group/auto/KubeletFlags 0.35
310 TestNetworkPlugins/group/auto/NetCatPod 9.19
311 TestNetworkPlugins/group/auto/DNS 0.14
312 TestNetworkPlugins/group/auto/Localhost 0.13
313 TestNetworkPlugins/group/auto/HairPin 0.11
315 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
316 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
317 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
318 TestNetworkPlugins/group/kindnet/DNS 0.14
319 TestNetworkPlugins/group/kindnet/Localhost 0.13
320 TestNetworkPlugins/group/kindnet/HairPin 0.15
321 TestNetworkPlugins/group/enable-default-cni/Start 288.75
322 TestNetworkPlugins/group/cilium/ControllerPod 5.02
323 TestNetworkPlugins/group/cilium/KubeletFlags 0.34
324 TestNetworkPlugins/group/cilium/NetCatPod 10.8
325 TestNetworkPlugins/group/cilium/DNS 0.12
326 TestNetworkPlugins/group/cilium/Localhost 0.12
327 TestNetworkPlugins/group/cilium/HairPin 0.12
328 TestNetworkPlugins/group/bridge/Start 39.06
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
330 TestNetworkPlugins/group/bridge/NetCatPod 10.23
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.19
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.97
TestDownloadOnly/v1.16.0/json-events (26.53s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-224450 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-224450 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (26.527240762s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (26.53s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-224450
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-224450: exit status 85 (86.722367ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-224450 | jenkins | v1.27.1 | 01 Nov 22 22:44 UTC |          |
	|         | -p download-only-224450        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/01 22:44:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 22:44:50.990612   12852 out.go:296] Setting OutFile to fd 1 ...
	I1101 22:44:50.991096   12852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 22:44:50.991109   12852 out.go:309] Setting ErrFile to fd 2...
	I1101 22:44:50.991117   12852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 22:44:50.991387   12852 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	W1101 22:44:50.991623   12852 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15232-6112/.minikube/config/config.json: open /home/jenkins/minikube-integration/15232-6112/.minikube/config/config.json: no such file or directory
	I1101 22:44:50.992632   12852 out.go:303] Setting JSON to true
	I1101 22:44:50.993465   12852 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1637,"bootTime":1667341054,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 22:44:50.993532   12852 start.go:126] virtualization: kvm guest
	I1101 22:44:50.996585   12852 out.go:97] [download-only-224450] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	W1101 22:44:50.996675   12852 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 22:44:50.996703   12852 notify.go:220] Checking for updates...
	I1101 22:44:50.998158   12852 out.go:169] MINIKUBE_LOCATION=15232
	I1101 22:44:50.999820   12852 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 22:44:51.001304   12852 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 22:44:51.002939   12852 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	I1101 22:44:51.004416   12852 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 22:44:51.007245   12852 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 22:44:51.007387   12852 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 22:44:51.033971   12852 docker.go:137] docker version: linux-20.10.21
	I1101 22:44:51.034054   12852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 22:44:52.031434   12852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-11-01 22:44:51.052464845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 22:44:52.031545   12852 docker.go:254] overlay module found
	I1101 22:44:52.033712   12852 out.go:97] Using the docker driver based on user configuration
	I1101 22:44:52.033735   12852 start.go:282] selected driver: docker
	I1101 22:44:52.033747   12852 start.go:808] validating driver "docker" against <nil>
	I1101 22:44:52.033829   12852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 22:44:52.154027   12852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-11-01 22:44:52.052795174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 22:44:52.154155   12852 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1101 22:44:52.154599   12852 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I1101 22:44:52.154704   12852 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 22:44:52.157064   12852 out.go:169] Using Docker driver with root privileges
	I1101 22:44:52.158566   12852 cni.go:95] Creating CNI manager for ""
	I1101 22:44:52.158589   12852 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 22:44:52.158607   12852 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1101 22:44:52.158617   12852 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1101 22:44:52.158621   12852 start_flags.go:312] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 22:44:52.158633   12852 start_flags.go:317] config:
	{Name:download-only-224450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-224450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 22:44:52.160305   12852 out.go:97] Starting control plane node download-only-224450 in cluster download-only-224450
	I1101 22:44:52.160326   12852 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1101 22:44:52.161812   12852 out.go:97] Pulling base image ...
	I1101 22:44:52.161837   12852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1101 22:44:52.161881   12852 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 22:44:52.180987   12852 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1101 22:44:52.181308   12852 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1101 22:44:52.181417   12852 image.go:120] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1101 22:44:52.272090   12852 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1101 22:44:52.272127   12852 cache.go:57] Caching tarball of preloaded images
	I1101 22:44:52.272303   12852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1101 22:44:52.274768   12852 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1101 22:44:52.274792   12852 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1101 22:44:52.388520   12852 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1101 22:44:58.215563   12852 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1101 22:44:58.215648   12852 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1101 22:44:59.077851   12852 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1101 22:44:59.078183   12852 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/download-only-224450/config.json ...
	I1101 22:44:59.078218   12852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/download-only-224450/config.json: {Name:mk8eb3f776fbfd0eb77f65a1cb48f9f225228b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 22:44:59.078395   12852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1101 22:44:59.078606   12852 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/15232-6112/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-224450"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
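Note on the download step in the log above: the `?checksum=md5:...` query on the preload tarball URL is how minikube's downloader validates the fetched file before caching it. A minimal Go sketch of that verification step (the input bytes and the `verifyMD5` helper are illustrative stand-ins, not code from minikube's download.go):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// verifyMD5 hashes the downloaded bytes and compares the hex digest
// against the expected checksum carried in the download URL.
func verifyMD5(data []byte, want string) bool {
	sum := md5.Sum(data)
	return hex.EncodeToString(sum[:]) == want
}

func main() {
	data := []byte("preloaded-images-k8s") // stand-in for the tarball bytes
	good := fmt.Sprintf("%x", md5.Sum(data))
	fmt.Println(verifyMD5(data, good))       // matching digest: true
	fmt.Println(verifyMD5(data, "deadbeef")) // mismatch: false, download is discarded
}
```

A mismatch at this step is what surfaces as a checksum-verification failure in the preload log rather than a corrupt cache later on.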

TestDownloadOnly/v1.25.3/json-events (9.66s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-224450 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-224450 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.661193416s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (9.66s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (1.27s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-224450
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-224450: exit status 85 (1.269638783s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-224450 | jenkins | v1.27.1 | 01 Nov 22 22:44 UTC |          |
	|         | -p download-only-224450        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-224450 | jenkins | v1.27.1 | 01 Nov 22 22:45 UTC |          |
	|         | -p download-only-224450        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/01 22:45:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 22:45:17.606038   13167 out.go:296] Setting OutFile to fd 1 ...
	I1101 22:45:17.606149   13167 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 22:45:17.606160   13167 out.go:309] Setting ErrFile to fd 2...
	I1101 22:45:17.606165   13167 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 22:45:17.606277   13167 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	W1101 22:45:17.606411   13167 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15232-6112/.minikube/config/config.json: open /home/jenkins/minikube-integration/15232-6112/.minikube/config/config.json: no such file or directory
	I1101 22:45:17.606851   13167 out.go:303] Setting JSON to true
	I1101 22:45:17.607636   13167 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1664,"bootTime":1667341054,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 22:45:17.607692   13167 start.go:126] virtualization: kvm guest
	I1101 22:45:17.610305   13167 out.go:97] [download-only-224450] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1101 22:45:17.610387   13167 notify.go:220] Checking for updates...
	I1101 22:45:17.612047   13167 out.go:169] MINIKUBE_LOCATION=15232
	I1101 22:45:17.613800   13167 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 22:45:17.615317   13167 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 22:45:17.616912   13167 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	I1101 22:45:17.618629   13167 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 22:45:17.621473   13167 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 22:45:17.621856   13167 config.go:180] Loaded profile config "download-only-224450": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1101 22:45:17.621896   13167 start.go:716] api.Load failed for download-only-224450: filestore "download-only-224450": Docker machine "download-only-224450" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1101 22:45:17.621945   13167 driver.go:365] Setting default libvirt URI to qemu:///system
	W1101 22:45:17.621979   13167 start.go:716] api.Load failed for download-only-224450: filestore "download-only-224450": Docker machine "download-only-224450" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1101 22:45:17.648210   13167 docker.go:137] docker version: linux-20.10.21
	I1101 22:45:17.648274   13167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 22:45:17.748251   13167 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-11-01 22:45:17.66519889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 22:45:17.748389   13167 docker.go:254] overlay module found
	I1101 22:45:17.750580   13167 out.go:97] Using the docker driver based on existing profile
	I1101 22:45:17.750600   13167 start.go:282] selected driver: docker
	I1101 22:45:17.750612   13167 start.go:808] validating driver "docker" against &{Name:download-only-224450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-224450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 22:45:17.750784   13167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 22:45:17.841675   13167 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-11-01 22:45:17.767908744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 22:45:17.842209   13167 cni.go:95] Creating CNI manager for ""
	I1101 22:45:17.842225   13167 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I1101 22:45:17.842236   13167 start_flags.go:317] config:
	{Name:download-only-224450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-224450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 22:45:17.844439   13167 out.go:97] Starting control plane node download-only-224450 in cluster download-only-224450
	I1101 22:45:17.844457   13167 cache.go:120] Beginning downloading kic base image for docker with containerd
	I1101 22:45:17.846257   13167 out.go:97] Pulling base image ...
	I1101 22:45:17.846278   13167 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1101 22:45:17.846383   13167 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1101 22:45:17.865838   13167 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1101 22:45:17.866045   13167 image.go:60] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1101 22:45:17.866065   13167 image.go:63] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory, skipping pull
	I1101 22:45:17.866070   13167 image.go:104] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in cache, skipping pull
	I1101 22:45:17.866083   13167 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 as a tarball
	I1101 22:45:17.957755   13167 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1101 22:45:17.957792   13167 cache.go:57] Caching tarball of preloaded images
	I1101 22:45:17.957981   13167 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1101 22:45:17.960495   13167 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I1101 22:45:17.960515   13167 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	I1101 22:45:18.070581   13167 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:60f9fee056da17edf086af60afca6341 -> /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-224450"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (1.27s)

TestDownloadOnly/DeleteAll (0.54s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.54s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-224450
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnlyKic (3.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-224529 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-224529 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (2.091298881s)
helpers_test.go:175: Cleaning up "download-docker-224529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-224529
--- PASS: TestDownloadOnlyKic (3.56s)

TestBinaryMirror (0.83s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-224532 --alsologtostderr --binary-mirror http://127.0.0.1:34841 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-224532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-224532
--- PASS: TestBinaryMirror (0.83s)

TestOffline (69.72s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-231601 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-231601 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m6.595748619s)
helpers_test.go:175: Cleaning up "offline-containerd-231601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-231601

helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-231601: (3.128264414s)
--- PASS: TestOffline (69.72s)

TestAddons/Setup (145.49s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-224533 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-224533 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.49426255s)
--- PASS: TestAddons/Setup (145.49s)

TestAddons/parallel/Registry (18.25s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

addons_test.go:283: registry stabilized in 8.561802ms

addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

helpers_test.go:342: "registry-bg5c7" [983b5c3d-865e-41f7-ad02-9dc3126128ae] Running

addons_test.go:285: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007426902s

addons_test.go:288: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-dwzc9" [3e2a556f-a87f-46ed-9607-931562656487] Running

addons_test.go:288: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007550388s
addons_test.go:293: (dbg) Run:  kubectl --context addons-224533 delete po -l run=registry-test --now
addons_test.go:298: (dbg) Run:  kubectl --context addons-224533 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

addons_test.go:298: (dbg) Done: kubectl --context addons-224533 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.494119778s)
addons_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 ip
2022/11/01 22:48:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.25s)

TestAddons/parallel/Ingress (29.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Run:  kubectl --context addons-224533 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

addons_test.go:165: (dbg) Done: kubectl --context addons-224533 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (7.604191665s)
addons_test.go:185: (dbg) Run:  kubectl --context addons-224533 replace --force -f testdata/nginx-ingress-v1.yaml

addons_test.go:185: (dbg) Non-zero exit: kubectl --context addons-224533 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (190.674645ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.102.159.181:443: connect: connection refused

** /stderr **

addons_test.go:185: (dbg) Run:  kubectl --context addons-224533 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:185: (dbg) Non-zero exit: kubectl --context addons-224533 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (160.14876ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.102.159.181:443: connect: connection refused

** /stderr **

addons_test.go:185: (dbg) Run:  kubectl --context addons-224533 replace --force -f testdata/nginx-ingress-v1.yaml

addons_test.go:198: (dbg) Run:  kubectl --context addons-224533 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [eea9dd2b-86f2-4351-a70c-71d9cf61e762] Pending

helpers_test.go:342: "nginx" [eea9dd2b-86f2-4351-a70c-71d9cf61e762] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [eea9dd2b-86f2-4351-a70c-71d9cf61e762] Running

addons_test.go:203: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.033279891s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context addons-224533 replace --force -f testdata/ingress-dns-example-v1.yaml

addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p addons-224533 addons disable ingress-dns --alsologtostderr -v=1: (1.555877167s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 addons disable ingress --alsologtostderr -v=1

addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p addons-224533 addons disable ingress --alsologtostderr -v=1: (7.49793635s)
--- PASS: TestAddons/parallel/Ingress (29.15s)

TestAddons/parallel/MetricsServer (6.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

addons_test.go:360: metrics-server stabilized in 8.803984ms

addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

helpers_test.go:342: "metrics-server-769cd898cd-nr569" [3e5cba6c-d5e0-4517-866f-b430eaf233b5] Running
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007734267s
addons_test.go:368: (dbg) Run:  kubectl --context addons-224533 top pods -n kube-system
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 addons disable metrics-server --alsologtostderr -v=1
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: (dbg) Done: out/minikube-linux-amd64 -p addons-224533 addons disable metrics-server --alsologtostderr -v=1: (1.228234997s)
--- PASS: TestAddons/parallel/MetricsServer (6.31s)
TestAddons/parallel/HelmTiller (14.49s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: tiller-deploy stabilized in 1.652291ms
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-gw65l" [8844d628-c071-4f9d-910a-c65141bddc9a] Running
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.072722117s
addons_test.go:426: (dbg) Run:  kubectl --context addons-224533 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:426: (dbg) Done: kubectl --context addons-224533 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.989435381s)
addons_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.49s)
TestAddons/parallel/CSI (43.24s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:514: csi-hostpath-driver pods stabilized in 18.958851ms
addons_test.go:517: (dbg) Run:  kubectl --context addons-224533 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:522: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-224533 get pvc hpvc -o jsonpath={.status.phase} -n default
=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-224533 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:527: (dbg) Run:  kubectl --context addons-224533 create -f testdata/csi-hostpath-driver/pv-pod.yaml
=== CONT  TestAddons/parallel/CSI
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [de1c7b06-afaf-492c-aa4a-a11a82124a6d] Pending
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [de1c7b06-afaf-492c-aa4a-a11a82124a6d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [de1c7b06-afaf-492c-aa4a-a11a82124a6d] Running
addons_test.go:532: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.005683411s
addons_test.go:537: (dbg) Run:  kubectl --context addons-224533 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-224533 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-224533 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:547: (dbg) Run:  kubectl --context addons-224533 delete pod task-pv-pod
addons_test.go:553: (dbg) Run:  kubectl --context addons-224533 delete pvc hpvc
=== CONT  TestAddons/parallel/CSI
addons_test.go:559: (dbg) Run:  kubectl --context addons-224533 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-224533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-224533 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
=== CONT  TestAddons/parallel/CSI
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [5c762293-17a1-4f16-8f1f-07352816f0c7] Pending
helpers_test.go:342: "task-pv-pod-restore" [5c762293-17a1-4f16-8f1f-07352816f0c7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [5c762293-17a1-4f16-8f1f-07352816f0c7] Running
addons_test.go:574: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.00590619s
addons_test.go:579: (dbg) Run:  kubectl --context addons-224533 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Run:  kubectl --context addons-224533 delete pvc hpvc-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-224533 delete volumesnapshot new-snapshot-demo
addons_test.go:591: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:591: (dbg) Done: out/minikube-linux-amd64 -p addons-224533 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.784194032s)
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.24s)
TestAddons/parallel/Headlamp (9.29s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-224533 --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-224533 --alsologtostderr -v=1: (1.279960724s)
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-7swxr" [8eea976d-dde8-4cdf-9bb8-40abca1d4dda] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-7swxr" [8eea976d-dde8-4cdf-9bb8-40abca1d4dda] Running
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.005824621s
--- PASS: TestAddons/parallel/Headlamp (9.29s)
TestAddons/parallel/CloudSpanner (5.33s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:342: "cloud-spanner-emulator-6c47ff8fb6-8lm2t" [0eba1152-0fba-4913-b6bb-ba419ae0ae45] Running
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006066403s
addons_test.go:762: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-224533
--- PASS: TestAddons/parallel/CloudSpanner (5.33s)
TestAddons/serial/GCPAuth (41.51s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:606: (dbg) Run:  kubectl --context addons-224533 create -f testdata/busybox.yaml
addons_test.go:613: (dbg) Run:  kubectl --context addons-224533 create sa gcp-auth-test
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [ed90d5f6-bfb6-480f-bc20-19c4ad19eb37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [ed90d5f6-bfb6-480f-bc20-19c4ad19eb37] Running
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.006070215s
addons_test.go:625: (dbg) Run:  kubectl --context addons-224533 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:637: (dbg) Run:  kubectl --context addons-224533 describe sa gcp-auth-test
addons_test.go:675: (dbg) Run:  kubectl --context addons-224533 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:688: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:688: (dbg) Done: out/minikube-linux-amd64 -p addons-224533 addons disable gcp-auth --alsologtostderr -v=1: (6.085779784s)
addons_test.go:704: (dbg) Run:  out/minikube-linux-amd64 -p addons-224533 addons enable gcp-auth
addons_test.go:704: (dbg) Done: out/minikube-linux-amd64 -p addons-224533 addons enable gcp-auth: (2.141208451s)
addons_test.go:710: (dbg) Run:  kubectl --context addons-224533 apply -f testdata/private-image.yaml
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-5c86c669bd-pfr75" [e4bb2f31-1b15-4dfe-8b94-c8b4968b8d06] Pending
helpers_test.go:342: "private-image-5c86c669bd-pfr75" [e4bb2f31-1b15-4dfe-8b94-c8b4968b8d06] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-5c86c669bd-pfr75" [e4bb2f31-1b15-4dfe-8b94-c8b4968b8d06] Running
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 16.006817937s
addons_test.go:723: (dbg) Run:  kubectl --context addons-224533 apply -f testdata/private-image-eu.yaml
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-64c96f687b-2bkt6" [e14ec01e-2c46-4eb2-a4f2-16d77efc850e] Pending
helpers_test.go:342: "private-image-eu-64c96f687b-2bkt6" [e14ec01e-2c46-4eb2-a4f2-16d77efc850e] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-64c96f687b-2bkt6" [e14ec01e-2c46-4eb2-a4f2-16d77efc850e] Running
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 8.007329249s
--- PASS: TestAddons/serial/GCPAuth (41.51s)
TestAddons/StoppedEnableDisable (20.19s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-224533
addons_test.go:135: (dbg) Done: out/minikube-linux-amd64 stop -p addons-224533: (20.001263114s)
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-224533
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-224533
--- PASS: TestAddons/StoppedEnableDisable (20.19s)
TestCertOptions (34.45s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-231938 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1101 23:19:42.407315   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-231938 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.698979429s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-231938 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-231938 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-231938 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-231938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-231938
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-231938: (1.985247611s)
--- PASS: TestCertOptions (34.45s)
TestCertExpiration (222.12s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-231852 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-231852 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (25.320480031s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-231852 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-231852 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (14.535504609s)
helpers_test.go:175: Cleaning up "cert-expiration-231852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-231852
E1101 23:22:32.185522   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-231852: (2.264727794s)
--- PASS: TestCertExpiration (222.12s)
TestForceSystemdFlag (44.22s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-231915 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-231915 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.046762195s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-231915 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-231915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-231915
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-231915: (2.770817439s)
--- PASS: TestForceSystemdFlag (44.22s)
TestForceSystemdEnv (37.8s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-231837 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-231837 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.329413982s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-231837 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-231837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-231837
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-231837: (2.111912733s)
--- PASS: TestForceSystemdEnv (37.80s)
TestKVMDriverInstallOrUpdate (8.58s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.58s)
TestErrorSpam/setup (22.26s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-224956 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-224956 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-224956 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-224956 --driver=docker  --container-runtime=containerd: (22.256201963s)
--- PASS: TestErrorSpam/setup (22.26s)
TestErrorSpam/start (0.92s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 start --dry-run
--- PASS: TestErrorSpam/start (0.92s)
TestErrorSpam/status (1.06s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 status
--- PASS: TestErrorSpam/status (1.06s)
TestErrorSpam/pause (1.55s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 pause
--- PASS: TestErrorSpam/pause (1.55s)
TestErrorSpam/unpause (1.53s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 unpause
--- PASS: TestErrorSpam/unpause (1.53s)
TestErrorSpam/stop (1.48s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 stop: (1.242426508s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-224956 --log_dir /tmp/nospam-224956 stop
--- PASS: TestErrorSpam/stop (1.48s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/test/nested/copy/12840/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (55.47s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225030 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-225030 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (55.468261028s)
--- PASS: TestFunctional/serial/StartWithProxy (55.47s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (15.58s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225030 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-225030 --alsologtostderr -v=8: (15.578336068s)
functional_test.go:656: soft start took 15.579012649s for "functional-225030" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.58s)
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-225030 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)
TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 cache add k8s.gcr.io/pause:3.1: (1.518167619s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 cache add k8s.gcr.io/pause:3.3: (1.497999347s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 cache add k8s.gcr.io/pause:latest: (1.169786316s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)
TestFunctional/serial/CacheCmd/cache/add_local (2.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-225030 /tmp/TestFunctionalserialCacheCmdcacheadd_local1728688251/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 cache add minikube-local-cache-test:functional-225030
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 cache add minikube-local-cache-test:functional-225030: (1.999060656s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 cache delete minikube-local-cache-test:functional-225030
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-225030
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225030 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (334.926053ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 cache reload: (1.135458959s)
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 kubectl -- --context functional-225030 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-225030 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (38.84s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225030 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-225030 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.835560096s)
functional_test.go:754: restart took 38.835657165s for "functional-225030" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.84s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-225030 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.11s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 logs: (1.11145057s)
--- PASS: TestFunctional/serial/LogsCmd (1.11s)

TestFunctional/serial/LogsFileCmd (1.12s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 logs --file /tmp/TestFunctionalserialLogsFileCmd2854194173/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 logs --file /tmp/TestFunctionalserialLogsFileCmd2854194173/001/logs.txt: (1.119462722s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)

TestFunctional/parallel/ConfigCmd (0.56s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225030 config get cpus: exit status 14 (90.747893ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225030 config get cpus: exit status 14 (91.912335ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.56s)

TestFunctional/parallel/DashboardCmd (13.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-225030 --alsologtostderr -v=1]
E1101 22:52:59.863749   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 22:53:00.504215   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-225030 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 50444: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.51s)

TestFunctional/parallel/DryRun (0.61s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225030 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-225030 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (233.900601ms)

-- stdout --
	* [functional-225030] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1101 22:52:47.293263   48116 out.go:296] Setting OutFile to fd 1 ...
	I1101 22:52:47.293400   48116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 22:52:47.293416   48116 out.go:309] Setting ErrFile to fd 2...
	I1101 22:52:47.293423   48116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 22:52:47.293523   48116 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	I1101 22:52:47.294010   48116 out.go:303] Setting JSON to false
	I1101 22:52:47.295119   48116 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2113,"bootTime":1667341054,"procs":515,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 22:52:47.295178   48116 start.go:126] virtualization: kvm guest
	I1101 22:52:47.298144   48116 out.go:177] * [functional-225030] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1101 22:52:47.299955   48116 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 22:52:47.299890   48116 notify.go:220] Checking for updates...
	I1101 22:52:47.303063   48116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 22:52:47.304785   48116 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 22:52:47.306314   48116 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	I1101 22:52:47.307774   48116 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 22:52:47.309742   48116 config.go:180] Loaded profile config "functional-225030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 22:52:47.310166   48116 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 22:52:47.342503   48116 docker.go:137] docker version: linux-20.10.21
	I1101 22:52:47.342593   48116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 22:52:47.443712   48116 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-01 22:52:47.365077427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 22:52:47.443819   48116 docker.go:254] overlay module found
	I1101 22:52:47.446460   48116 out.go:177] * Using the docker driver based on existing profile
	I1101 22:52:47.448017   48116 start.go:282] selected driver: docker
	I1101 22:52:47.448042   48116 start.go:808] validating driver "docker" against &{Name:functional-225030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-225030 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 22:52:47.448162   48116 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 22:52:47.450664   48116 out.go:177] 
	W1101 22:52:47.452158   48116 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 22:52:47.453628   48116 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225030 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.61s)

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225030 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-225030 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (279.177478ms)

-- stdout --
	* [functional-225030] minikube v1.27.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1101 22:52:47.923644   48444 out.go:296] Setting OutFile to fd 1 ...
	I1101 22:52:47.923957   48444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 22:52:47.923973   48444 out.go:309] Setting ErrFile to fd 2...
	I1101 22:52:47.923982   48444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 22:52:47.924228   48444 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	I1101 22:52:47.925004   48444 out.go:303] Setting JSON to false
	I1101 22:52:47.926650   48444 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2114,"bootTime":1667341054,"procs":522,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 22:52:47.926742   48444 start.go:126] virtualization: kvm guest
	I1101 22:52:47.929518   48444 out.go:177] * [functional-225030] minikube v1.27.1 sur Ubuntu 20.04 (kvm/amd64)
	I1101 22:52:47.931211   48444 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 22:52:47.931176   48444 notify.go:220] Checking for updates...
	I1101 22:52:47.932902   48444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 22:52:47.934695   48444 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 22:52:47.936468   48444 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	I1101 22:52:47.938144   48444 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 22:52:47.940171   48444 config.go:180] Loaded profile config "functional-225030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 22:52:47.940761   48444 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 22:52:47.977789   48444 docker.go:137] docker version: linux-20.10.21
	I1101 22:52:47.977901   48444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 22:52:48.095101   48444 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-01 22:52:48.00417458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 22:52:48.095201   48444 docker.go:254] overlay module found
	I1101 22:52:48.098489   48444 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1101 22:52:48.100032   48444 start.go:282] selected driver: docker
	I1101 22:52:48.100055   48444 start.go:808] validating driver "docker" against &{Name:functional-225030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-225030 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1101 22:52:48.100167   48444 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 22:52:48.102687   48444 out.go:177] 
	W1101 22:52:48.104288   48444 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 22:52:48.105782   48444 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

TestFunctional/parallel/StatusCmd (1.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 status
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

TestFunctional/parallel/ServiceCmd (12.1s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-225030 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-225030 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-6wdjt" [a2bea829-90e3-4df3-8f1f-fe3cde59d228] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-5fcdfb5cc4-6wdjt" [a2bea829-90e3-4df3-8f1f-fe3cde59d228] Running
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.01253252s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 service list
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 service --namespace=default --https --url hello-node
functional_test.go:1476: found endpoint: https://192.168.49.2:32375
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 service hello-node --url
functional_test.go:1511: found endpoint for hello-node: http://192.168.49.2:32375
--- PASS: TestFunctional/parallel/ServiceCmd (12.10s)

TestFunctional/parallel/ServiceCmdConnect (8.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-225030 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-225030 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-nzttp" [bb9043bf-c1ba-416b-b0b4-2b0c372379f0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-6458c8fb6f-nzttp" [bb9043bf-c1ba-416b-b0b4-2b0c372379f0] Running
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.006338789s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 service hello-node-connect --url
E1101 22:52:59.224244   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 22:52:59.230811   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 22:52:59.241793   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 22:52:59.262045   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 22:52:59.302312   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 22:52:59.382621   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 22:52:59.542742   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.49.2:32365
functional_test.go:1605: http://192.168.49.2:32365: success! body:

Hostname: hello-node-connect-6458c8fb6f-nzttp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32365
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.73s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (33.71s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [84890375-a873-4138-87f0-d764008889d0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009194102s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-225030 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-225030 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-225030 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-225030 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [c782f65b-d2b9-4e71-a6d6-c97e8cc5bcde] Pending
helpers_test.go:342: "sp-pod" [c782f65b-d2b9-4e71-a6d6-c97e8cc5bcde] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [c782f65b-d2b9-4e71-a6d6-c97e8cc5bcde] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.006403659s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-225030 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-225030 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-225030 delete -f testdata/storage-provisioner/pod.yaml: (1.535612705s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-225030 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [57d7e546-0759-42bd-8df1-5ffcfeefd107] Pending
helpers_test.go:342: "sp-pod" [57d7e546-0759-42bd-8df1-5ffcfeefd107] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [57d7e546-0759-42bd-8df1-5ffcfeefd107] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.006778831s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-225030 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.71s)

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (1.5s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh -n functional-225030 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 cp functional-225030:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4107383733/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh -n functional-225030 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.50s)

TestFunctional/parallel/MySQL (24.26s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-225030 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-qfcjc" [df517582-6c6e-47da-bce9-1238b45ba5cd] Pending
helpers_test.go:342: "mysql-596b7fcdbf-qfcjc" [df517582-6c6e-47da-bce9-1238b45ba5cd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:342: "mysql-596b7fcdbf-qfcjc" [df517582-6c6e-47da-bce9-1238b45ba5cd] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.01781639s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-225030 exec mysql-596b7fcdbf-qfcjc -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-225030 exec mysql-596b7fcdbf-qfcjc -- mysql -ppassword -e "show databases;": exit status 1 (289.808678ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-225030 exec mysql-596b7fcdbf-qfcjc -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-225030 exec mysql-596b7fcdbf-qfcjc -- mysql -ppassword -e "show databases;": exit status 1 (304.428341ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-225030 exec mysql-596b7fcdbf-qfcjc -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-225030 exec mysql-596b7fcdbf-qfcjc -- mysql -ppassword -e "show databases;": exit status 1 (234.733652ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-225030 exec mysql-596b7fcdbf-qfcjc -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-225030 exec mysql-596b7fcdbf-qfcjc -- mysql -ppassword -e "show databases;": exit status 1 (134.364686ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-225030 exec mysql-596b7fcdbf-qfcjc -- mysql -ppassword -e "show databases;"
2022/11/01 22:53:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (24.26s)

TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/12840/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo cat /etc/test/nested/copy/12840/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

TestFunctional/parallel/CertSync (2.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/12840.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo cat /etc/ssl/certs/12840.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/12840.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo cat /usr/share/ca-certificates/12840.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/128402.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo cat /etc/ssl/certs/128402.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/128402.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo cat /usr/share/ca-certificates/128402.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.42s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-225030 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo systemctl is-active docker"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225030 ssh "sudo systemctl is-active docker": exit status 1 (388.444458ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225030 ssh "sudo systemctl is-active crio": exit status 1 (344.17122ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

x
+
TestFunctional/parallel/Version/components (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 version -o=json --components: (1.058418146s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-225030 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-225030
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-225030
docker.io/kindest/kindnetd:v20221004-44d545d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
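[Editor's note] The short-format listing above is one `REPOSITORY:TAG` reference per line. As an illustration only (not part of the test suite; the helper name is my own, and the sample lines are copied from the output above), such references can be split back into repository/tag pairs:

```python
# Split "repo:tag" image references as printed by `image ls --format short`.
# The tag is taken as everything after the last colon; references carrying a
# registry port or digest (none appear above) would need extra handling.
def split_ref(ref: str) -> tuple[str, str]:
    repo, _, tag = ref.rpartition(":")
    return repo, tag

sample = """\
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
docker.io/library/nginx:latest
"""

for line in sample.splitlines():
    repo, tag = split_ref(line)
    print(f"{repo:40s} {tag}")
```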

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image ls --format table

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-225030 image ls --format table:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.25.3            | sha256:603999 | 31.3MB |
| docker.io/library/minikube-local-cache-test | functional-225030  | sha256:1465a1 | 1.74kB |
| docker.io/library/mysql                     | 5.7                | sha256:149052 | 144MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3             | sha256:5185b9 | 14.8MB |
| docker.io/library/nginx                     | alpine             | sha256:b99730 | 10.2MB |
| k8s.gcr.io/echoserver                       | 1.8                | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/pause                            | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/kube-apiserver              | v1.25.3            | sha256:0346db | 34.2MB |
| registry.k8s.io/kube-proxy                  | v1.25.3            | sha256:beaaf0 | 20.3MB |
| docker.io/library/nginx                     | latest             | sha256:76c69f | 56.8MB |
| gcr.io/google-containers/addon-resizer      | functional-225030  | sha256:ffd4cf | 10.8MB |
| k8s.gcr.io/pause                            | 3.1                | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/pause                       | 3.8                | sha256:487387 | 311kB  |
| docker.io/kindest/kindnetd                  | v20221004-44d545d1 | sha256:d6e3e2 | 25.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.4-0            | sha256:a8a176 | 102MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3            | sha256:6d23ec | 15.8MB |
|---------------------------------------------|--------------------|---------------|--------|
E1101 22:53:09.465417   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
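[Editor's note] The table emitted by `image ls --format table` is pipe-delimited with dash border rows. A minimal parsing sketch, for illustration only (the parser is my own, not minikube's; the sample is a trimmed row from the table above):

```python
# Parse a pipe-delimited table into a list of dicts keyed by the header row.
def parse_table(text: str) -> list[dict]:
    rows, header = [], None
    for line in text.splitlines():
        # Skip border rows made only of "|" and "-", and any stray log lines.
        if not line.startswith("|") or set(line) <= {"|", "-"}:
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        if header is None:
            header = cells
        else:
            rows.append(dict(zip(header, cells)))
    return rows

sample = """\
|-------------------------|---------|---------------|--------|
|          Image          |   Tag   |   Image ID    |  Size  |
|-------------------------|---------|---------------|--------|
| registry.k8s.io/etcd    | 3.5.4-0 | sha256:a8a176 | 102MB  |
|-------------------------|---------|---------------|--------|
"""

print(parse_table(sample))
```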

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-225030 image ls --format json:
[{"id":"sha256:76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f","repoDigests":["docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f"],"repoTags":["docker.io/library/nginx:latest"],"size":"56841090"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-225030"],"size":"10823156"},{"id":"sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":["registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"20265805"},{"id":"sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"31261869"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},{"id":"sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":["registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"102157811"},{"id":"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f","repoDigests":["docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"],"repoTags":["docker.io/kindest/kindnetd:v20221004-44d545d1"],"size":"25830582"},{"id":"sha256:1465a1c0c5a8f8f297fbea583d4ca0c4a3068f1523c4a016e0c3c89fbff76be5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-225030"],"size":"1737"},{"id":"sha256:14905234a4ed471d6da5b7e09d9e9f62f4d350713e2b0e8c86652ebcbf710238","repoDigests":["docker.io/library/mysql@sha256:f5e2d4d7dccdc3f2a1d592bd3f0eb472b2f72f9fb942a84ff5b5cc049fe63a04"],"repoTags":["docker.io/library/mysql:5.7"],"size":"144343859"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"34238163"},{"id":"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":["registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"],"repoTags":["registry.k8s.io/pause:3.8"],"size":"311286"},{"id":"sha256:b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070","repoDigests":["docker.io/library/nginx@sha256:2452715dd322b3273419652b7721b64aa60305f606ef7a674ae28b6f12d155a3"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10243852"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"15798744"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
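[Editor's note] The JSON variant above is the easiest format to consume programmatically. A minimal sketch, for illustration only (the two entries are copied from the JSON output above; reading `size` as a byte count is an assumption consistent with the values shown):

```python
import json

# Two entries copied from the `image ls --format json` output above.
payload = """[
  {"id": "sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66",
   "repoDigests": ["registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"],
   "repoTags": ["registry.k8s.io/etcd:3.5.4-0"], "size": "102157811"},
  {"id": "sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517",
   "repoDigests": ["registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"],
   "repoTags": ["registry.k8s.io/pause:3.8"], "size": "311286"}
]"""

images = json.loads(payload)
# "size" is a string in the output above, so convert before summing.
total = sum(int(img["size"]) for img in images)
print(f"{len(images)} images, {total} bytes")  # → 2 images, 102469097 bytes
```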

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-225030 image ls --format yaml:
- id: sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "34238163"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070
repoDigests:
- docker.io/library/nginx@sha256:2452715dd322b3273419652b7721b64aa60305f606ef7a674ae28b6f12d155a3
repoTags:
- docker.io/library/nginx:alpine
size: "10243852"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-225030
size: "10823156"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests:
- registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "102157811"
- id: sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "15798744"
- id: sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f
repoDigests:
- docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe
repoTags:
- docker.io/kindest/kindnetd:v20221004-44d545d1
size: "25830582"
- id: sha256:14905234a4ed471d6da5b7e09d9e9f62f4d350713e2b0e8c86652ebcbf710238
repoDigests:
- docker.io/library/mysql@sha256:f5e2d4d7dccdc3f2a1d592bd3f0eb472b2f72f9fb942a84ff5b5cc049fe63a04
repoTags:
- docker.io/library/mysql:5.7
size: "144343859"
- id: sha256:76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f
repoDigests:
- docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f
repoTags:
- docker.io/library/nginx:latest
size: "56841090"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:1465a1c0c5a8f8f297fbea583d4ca0c4a3068f1523c4a016e0c3c89fbff76be5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-225030
size: "1737"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "31261869"
- id: sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "20265805"
- id: sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests:
- registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
repoTags:
- registry.k8s.io/pause:3.8
size: "311286"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225030 ssh pgrep buildkitd: exit status 1 (336.439924ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image build -t localhost/my-image:functional-225030 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 image build -t localhost/my-image:functional-225030 testdata/build: (3.004046688s)
functional_test.go:319: (dbg) Stderr: out/minikube-linux-amd64 -p functional-225030 image build -t localhost/my-image:functional-225030 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.2s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:c03374966a6fe58a0d75813dfc4a8643d7cc3e61aadedb881e8d16d2746ce862 0.0s done
#8 exporting config sha256:76ea0495f6c0d2e532773377cbff74d74d12891e66166bf0bb2d6112e5eacef2 0.0s done
#8 naming to localhost/my-image:functional-225030 done
#8 DONE 0.1s
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)

TestFunctional/parallel/ImageCommands/Setup (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.441848788s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-225030
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.48s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image load --daemon gcr.io/google-containers/addon-resizer:functional-225030

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 image load --daemon gcr.io/google-containers/addon-resizer:functional-225030: (3.715468174s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.94s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-225030 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-225030 apply -f testdata/testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [d4a2ce4f-0e32-4ad0-8b3d-a4dc1246bb36] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [d4a2ce4f-0e32-4ad0-8b3d-a4dc1246bb36] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.007138204s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image load --daemon gcr.io/google-containers/addon-resizer:functional-225030

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 image load --daemon gcr.io/google-containers/addon-resizer:functional-225030: (3.447439881s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.67s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.336478487s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-225030
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image load --daemon gcr.io/google-containers/addon-resizer:functional-225030

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 image load --daemon gcr.io/google-containers/addon-resizer:functional-225030: (4.021635606s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.63s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1311: Took "410.003305ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "103.25216ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-225030 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.98.203.249 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-225030 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (19.18s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-225030 /tmp/TestFunctionalparallelMountCmdany-port560716652/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1667343166571009105" to /tmp/TestFunctionalparallelMountCmdany-port560716652/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1667343166571009105" to /tmp/TestFunctionalparallelMountCmdany-port560716652/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1667343166571009105" to /tmp/TestFunctionalparallelMountCmdany-port560716652/001/test-1667343166571009105
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225030 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (402.719502ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 22:52 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 22:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 22:52 test-1667343166571009105
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh cat /mount-9p/test-1667343166571009105
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-225030 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [b569d4cd-b29c-4df7-b816-f1c79c7e31e0] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [b569d4cd-b29c-4df7-b816-f1c79c7e31e0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [b569d4cd-b29c-4df7-b816-f1c79c7e31e0] Running
E1101 22:53:01.784614   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [b569d4cd-b29c-4df7-b816-f1c79c7e31e0] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:342: "busybox-mount" [b569d4cd-b29c-4df7-b816-f1c79c7e31e0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.007074608s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-225030 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh stat /mount-9p/created-by-test
E1101 22:53:04.344748   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-225030 /tmp/TestFunctionalparallelMountCmdany-port560716652/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.18s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "408.720441ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "72.14804ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image save gcr.io/google-containers/addon-resizer:functional-225030 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image rm gcr.io/google-containers/addon-resizer:functional-225030
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-225030 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.005883928s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-225030
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 image save --daemon gcr.io/google-containers/addon-resizer:functional-225030
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-225030
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

TestFunctional/parallel/MountCmd/specific-port (2.49s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-225030 /tmp/TestFunctionalparallelMountCmdspecific-port82714652/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225030 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (477.783443ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-225030 /tmp/TestFunctionalparallelMountCmdspecific-port82714652/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-225030 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225030 ssh "sudo umount -f /mount-9p": exit status 1 (468.447721ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-225030 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-225030 /tmp/TestFunctionalparallelMountCmdspecific-port82714652/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.49s)

TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-225030
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-225030
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-225030
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (72.53s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-225316 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1101 22:53:19.705934   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 22:53:40.186576   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 22:54:21.147138   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-225316 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m12.528953295s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (72.53s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.18s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225316 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-225316 addons enable ingress --alsologtostderr -v=5: (13.174929571s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.18s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.37s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225316 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.37s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (42.38s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:165: (dbg) Run:  kubectl --context ingress-addon-legacy-225316 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:165: (dbg) Done: kubectl --context ingress-addon-legacy-225316 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.005492764s)
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-225316 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context ingress-addon-legacy-225316 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [0bd7e40e-4659-4f15-93a0-faa5341e012a] Pending
helpers_test.go:342: "nginx" [0bd7e40e-4659-4f15-93a0-faa5341e012a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [0bd7e40e-4659-4f15-93a0-faa5341e012a] Running
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.005601094s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225316 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context ingress-addon-legacy-225316 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225316 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225316 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-225316 addons disable ingress-dns --alsologtostderr -v=1: (8.889234706s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225316 addons disable ingress --alsologtostderr -v=1
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-225316 addons disable ingress --alsologtostderr -v=1: (7.260545936s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (42.38s)

TestJSONOutput/start/Command (46.42s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-225527 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1101 22:55:43.067913   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-225527 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (46.41443068s)
--- PASS: TestJSONOutput/start/Command (46.42s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-225527 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-225527 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-225527 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-225527 --output=json --user=testUser: (5.821490599s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-225625 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-225625 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.282646ms)
-- stdout --
	{"specversion":"1.0","id":"db455288-ff45-4958-a349-7df1440c6050","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-225625] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"07298569-cee1-4bc6-9b4e-34d3b9ab9ce7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15232"}}
	{"specversion":"1.0","id":"7f611ad6-94de-4945-bc31-346caf0d9ce8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c3abf5eb-5cf2-471d-8df5-0893748fcfd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig"}}
	{"specversion":"1.0","id":"4370dfca-0597-408b-a73b-78e1793f2a41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube"}}
	{"specversion":"1.0","id":"d0d6de92-80cc-4b23-bf9a-56f0eefa30b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"72e71b0d-4b30-4c5c-a28f-698151324017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-225625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-225625
--- PASS: TestErrorJSONOutput (0.27s)
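The stdout above shows minikube's `--output=json` events, one CloudEvents-style JSON object per line. As a minimal sketch (not minikube code), the event type and message can be pulled out of those lines like this, using two events copied verbatim from the output above:

```python
import json

# Two event lines copied from the TestErrorJSONOutput stdout above.
lines = [
    '{"specversion":"1.0","id":"db455288-ff45-4958-a349-7df1440c6050","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-225625] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}',
    '{"specversion":"1.0","id":"72e71b0d-4b30-4c5c-a28f-698151324017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}',
]

def classify(line):
    """Return (short_type, message) for one CloudEvents line."""
    ev = json.loads(line)
    short = ev["type"].rsplit(".", 1)[-1]   # "step", "info", "error", ...
    return short, ev["data"].get("message", "")

for line in lines:
    print(classify(line))
```

The second event carries `"exitcode":"56"`, which matches the `exit status 56` the test asserts on.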
TestKicCustomNetwork/create_custom_network (33.83s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-225626 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-225626 --network=: (31.600752197s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-225626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-225626
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-225626: (2.202250182s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.83s)
TestKicCustomNetwork/use_default_bridge_network (27.48s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-225700 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-225700 --network=bridge: (25.468446654s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-225700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-225700
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-225700: (1.986776536s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.48s)
TestKicExistingNetwork (30.67s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-225727 --network=existing-network
E1101 22:57:32.185421   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:32.190727   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:32.200991   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:32.221291   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:32.261631   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:32.342001   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:32.502686   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:32.956573   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:33.597488   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:34.878008   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:37.439504   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:42.560334   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 22:57:52.800611   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-225727 --network=existing-network: (28.44742976s)
helpers_test.go:175: Cleaning up "existing-network-225727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-225727
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-225727: (2.062552394s)
--- PASS: TestKicExistingNetwork (30.67s)
TestKicCustomSubnet (29s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-225758 --subnet=192.168.60.0/24
E1101 22:57:59.224715   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 22:58:13.281008   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-225758 --subnet=192.168.60.0/24: (26.809171523s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-225758 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-225758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-225758
E1101 22:58:26.909574   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-225758: (2.17033945s)
--- PASS: TestKicCustomSubnet (29.00s)
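TestKicCustomSubnet passes `--subnet=192.168.60.0/24` and then reads the created network back via `docker network inspect ... "{{(index .IPAM.Config 0).Subnet}}"`. A minimal sketch of the comparison it effectively performs, using Python's `ipaddress` module; the `reported` value here is an assumption standing in for the inspect output:

```python
import ipaddress

requested = "192.168.60.0/24"   # value passed via --subnet above
reported = "192.168.60.0/24"    # assumed: what `docker network inspect` printed

# The network Docker created must match the CIDR the user requested.
assert ipaddress.ip_network(reported) == ipaddress.ip_network(requested)

# Any node IP handed out on that network must fall inside the range.
assert ipaddress.ip_address("192.168.60.2") in ipaddress.ip_network(requested)
```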
TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)
TestMinikubeProfile (63.23s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-225827 --driver=docker  --container-runtime=containerd
E1101 22:58:54.242433   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-225827 --driver=docker  --container-runtime=containerd: (33.039501981s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-225827 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-225827 --driver=docker  --container-runtime=containerd: (24.895002737s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-225827
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-225827
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-225827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-225827
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-225827: (1.897539294s)
helpers_test.go:175: Cleaning up "first-225827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-225827
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-225827: (2.210279025s)
--- PASS: TestMinikubeProfile (63.23s)
TestMountStart/serial/StartWithMountFirst (4.75s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-225930 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-225930 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.745981552s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.75s)
TestMountStart/serial/VerifyMountFirst (0.31s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-225930 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
TestMountStart/serial/StartWithMountSecond (4.97s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-225930 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-225930 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.967208971s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.97s)
TestMountStart/serial/VerifyMountSecond (0.31s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-225930 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)
TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-225930 --alsologtostderr -v=5
E1101 22:59:42.406776   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 22:59:42.412068   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 22:59:42.422330   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 22:59:42.442578   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 22:59:42.483314   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-225930 --alsologtostderr -v=5: (1.704807822s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)
TestMountStart/serial/VerifyMountPostDelete (0.32s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-225930 ssh -- ls /minikube-host
E1101 22:59:42.563831   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 22:59:42.724215   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)
TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-225930
E1101 22:59:43.045249   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 22:59:43.686187   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-225930: (1.236163899s)
--- PASS: TestMountStart/serial/Stop (1.24s)
TestMountStart/serial/RestartStopped (6.57s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-225930
E1101 22:59:44.967326   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 22:59:47.527702   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-225930: (5.569175423s)
--- PASS: TestMountStart/serial/RestartStopped (6.57s)
TestMountStart/serial/VerifyMountPostStop (0.32s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-225930 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)
TestMultiNode/serial/FreshStart2Nodes (88.96s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225952 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1101 23:00:02.888938   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 23:00:16.162577   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 23:00:23.369680   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 23:01:04.331080   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225952 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m28.42751384s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (88.96s)
TestMultiNode/serial/DeployApp2Nodes (4.49s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-225952 -- rollout status deployment/busybox: (2.755775879s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- exec busybox-65db55d5d6-5bwlf -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- exec busybox-65db55d5d6-vskvq -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- exec busybox-65db55d5d6-5bwlf -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- exec busybox-65db55d5d6-vskvq -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- exec busybox-65db55d5d6-5bwlf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- exec busybox-65db55d5d6-vskvq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.49s)
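The jsonpath queries above (`{.items[*].status.podIP}` and `{.items[*].metadata.name}`) flatten one field from every pod into a single space-separated line. A sketch of what they resolve to, on a trimmed hypothetical pod list (the pod names are from the log above; the IPs are made up):

```python
# Hypothetical, trimmed `kubectl get pods -o json` payload.
pods = {
    "items": [
        {"metadata": {"name": "busybox-65db55d5d6-5bwlf"}, "status": {"podIP": "10.244.0.3"}},
        {"metadata": {"name": "busybox-65db55d5d6-vskvq"}, "status": {"podIP": "10.244.1.2"}},
    ]
}

# Equivalent of jsonpath '{.items[*].status.podIP}':
ips = " ".join(p["status"]["podIP"] for p in pods["items"])
# Equivalent of jsonpath '{.items[*].metadata.name}':
names = " ".join(p["metadata"]["name"] for p in pods["items"])
print(ips)    # -> "10.244.0.3 10.244.1.2"
print(names)
```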
TestMultiNode/serial/PingHostFrom2Pods (0.87s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- exec busybox-65db55d5d6-5bwlf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- exec busybox-65db55d5d6-5bwlf -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- exec busybox-65db55d5d6-vskvq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-225952 -- exec busybox-65db55d5d6-vskvq -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
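The shell pipeline above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, takes the fifth line of the nslookup output and the third single-space-separated field of that line. A sketch of the same extraction in Python, over a hypothetical nslookup transcript (the real in-pod output format may differ):

```python
# Hypothetical nslookup output as seen inside a busybox pod.
nslookup_out = """Server:\t\t10.96.0.10
Address:\t10.96.0.10#53

Name:\thost.minikube.internal
Address 1: 192.168.58.1 host.minikube.internal
"""

# awk 'NR==5'   -> take the fifth line (1-indexed)
line5 = nslookup_out.splitlines()[4]
# cut -d' ' -f3 -> third field when splitting on single spaces
host_ip = line5.split(" ")[2]
print(host_ip)   # -> 192.168.58.1
```

That extracted address is then the target of the `ping -c 1 192.168.58.1` check in the log above.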
TestMultiNode/serial/AddNode (31.27s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-225952 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-225952 -v 3 --alsologtostderr: (30.575695937s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.27s)
TestMultiNode/serial/ProfileList (0.34s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)
TestMultiNode/serial/CopyFile (11.35s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp testdata/cp-test.txt multinode-225952:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp multinode-225952:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4059101827/001/cp-test_multinode-225952.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp multinode-225952:/home/docker/cp-test.txt multinode-225952-m02:/home/docker/cp-test_multinode-225952_multinode-225952-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m02 "sudo cat /home/docker/cp-test_multinode-225952_multinode-225952-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp multinode-225952:/home/docker/cp-test.txt multinode-225952-m03:/home/docker/cp-test_multinode-225952_multinode-225952-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m03 "sudo cat /home/docker/cp-test_multinode-225952_multinode-225952-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp testdata/cp-test.txt multinode-225952-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp multinode-225952-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4059101827/001/cp-test_multinode-225952-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp multinode-225952-m02:/home/docker/cp-test.txt multinode-225952:/home/docker/cp-test_multinode-225952-m02_multinode-225952.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952 "sudo cat /home/docker/cp-test_multinode-225952-m02_multinode-225952.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp multinode-225952-m02:/home/docker/cp-test.txt multinode-225952-m03:/home/docker/cp-test_multinode-225952-m02_multinode-225952-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m03 "sudo cat /home/docker/cp-test_multinode-225952-m02_multinode-225952-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp testdata/cp-test.txt multinode-225952-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp multinode-225952-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4059101827/001/cp-test_multinode-225952-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp multinode-225952-m03:/home/docker/cp-test.txt multinode-225952:/home/docker/cp-test_multinode-225952-m03_multinode-225952.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952 "sudo cat /home/docker/cp-test_multinode-225952-m03_multinode-225952.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 cp multinode-225952-m03:/home/docker/cp-test.txt multinode-225952-m02:/home/docker/cp-test_multinode-225952-m03_multinode-225952-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 ssh -n multinode-225952-m02 "sudo cat /home/docker/cp-test_multinode-225952-m03_multinode-225952-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.35s)
TestMultiNode/serial/StopNode (2.34s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-225952 node stop m03: (1.237099483s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225952 status: exit status 7 (558.620051ms)
-- stdout --
	multinode-225952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-225952-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-225952-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
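The `minikube status` stdout above is a series of blank-line-separated node blocks; exit status 7 indicates at least one stopped host. A minimal sketch (not minikube code) of parsing that block format into per-node records, using a shortened copy of the output above:

```python
# Shortened copy of the `minikube status` stdout above.
status_text = """multinode-225952
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-225952-m03
type: Worker
host: Stopped
kubelet: Stopped
"""

nodes = []
for block in status_text.strip().split("\n\n"):
    lines = block.splitlines()
    node = {"name": lines[0]}       # first line of each block is the node name
    for line in lines[1:]:
        key, _, value = line.partition(": ")
        node[key] = value
    nodes.append(node)

# Exit status 7 corresponds to at least one node whose host is stopped.
any_stopped = any(n.get("host") == "Stopped" for n in nodes)
print(nodes, any_stopped)
```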
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225952 status --alsologtostderr: exit status 7 (543.830433ms)
-- stdout --
	multinode-225952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-225952-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-225952-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1101 23:02:11.975552  104606 out.go:296] Setting OutFile to fd 1 ...
	I1101 23:02:11.975654  104606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:02:11.975662  104606 out.go:309] Setting ErrFile to fd 2...
	I1101 23:02:11.975667  104606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:02:11.975770  104606 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	I1101 23:02:11.975921  104606 out.go:303] Setting JSON to false
	I1101 23:02:11.975954  104606 mustload.go:65] Loading cluster: multinode-225952
	I1101 23:02:11.975987  104606 notify.go:220] Checking for updates...
	I1101 23:02:11.976740  104606 config.go:180] Loaded profile config "multinode-225952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:02:11.976791  104606 status.go:255] checking status of multinode-225952 ...
	I1101 23:02:11.978353  104606 cli_runner.go:164] Run: docker container inspect multinode-225952 --format={{.State.Status}}
	I1101 23:02:12.005534  104606 status.go:330] multinode-225952 host status = "Running" (err=<nil>)
	I1101 23:02:12.005566  104606 host.go:66] Checking if "multinode-225952" exists ...
	I1101 23:02:12.005774  104606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-225952
	I1101 23:02:12.028764  104606 host.go:66] Checking if "multinode-225952" exists ...
	I1101 23:02:12.029039  104606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 23:02:12.029083  104606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-225952
	I1101 23:02:12.051707  104606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49227 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/multinode-225952/id_rsa Username:docker}
	I1101 23:02:12.131966  104606 ssh_runner.go:195] Run: systemctl --version
	I1101 23:02:12.135345  104606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:02:12.143844  104606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:02:12.237702  104606 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-11-01 23:02:12.163992307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:02:12.238254  104606 kubeconfig.go:92] found "multinode-225952" server: "https://192.168.58.2:8443"
	I1101 23:02:12.238280  104606 api_server.go:165] Checking apiserver status ...
	I1101 23:02:12.238307  104606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 23:02:12.247376  104606 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1194/cgroup
	I1101 23:02:12.254635  104606 api_server.go:181] apiserver freezer: "4:freezer:/docker/f7415a16ac942dfad0bd377bf6efe81ef2e4e8d5c5fe0484b779ff4d245b0ba2/kubepods/burstable/pod5287e2d90f617feaa5423c9303cb58f8/82d739b3a652c51584fe10d66709693ee5813f877c9841adb81066231a1bf21f"
	I1101 23:02:12.254686  104606 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f7415a16ac942dfad0bd377bf6efe81ef2e4e8d5c5fe0484b779ff4d245b0ba2/kubepods/burstable/pod5287e2d90f617feaa5423c9303cb58f8/82d739b3a652c51584fe10d66709693ee5813f877c9841adb81066231a1bf21f/freezer.state
	I1101 23:02:12.261327  104606 api_server.go:203] freezer state: "THAWED"
	I1101 23:02:12.261358  104606 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1101 23:02:12.265696  104606 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1101 23:02:12.265719  104606 status.go:421] multinode-225952 apiserver status = Running (err=<nil>)
	I1101 23:02:12.265736  104606 status.go:257] multinode-225952 status: &{Name:multinode-225952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 23:02:12.265757  104606 status.go:255] checking status of multinode-225952-m02 ...
	I1101 23:02:12.265975  104606 cli_runner.go:164] Run: docker container inspect multinode-225952-m02 --format={{.State.Status}}
	I1101 23:02:12.290731  104606 status.go:330] multinode-225952-m02 host status = "Running" (err=<nil>)
	I1101 23:02:12.290753  104606 host.go:66] Checking if "multinode-225952-m02" exists ...
	I1101 23:02:12.291003  104606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-225952-m02
	I1101 23:02:12.314044  104606 host.go:66] Checking if "multinode-225952-m02" exists ...
	I1101 23:02:12.314305  104606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 23:02:12.314341  104606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-225952-m02
	I1101 23:02:12.335887  104606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49232 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/multinode-225952-m02/id_rsa Username:docker}
	I1101 23:02:12.419809  104606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 23:02:12.428631  104606 status.go:257] multinode-225952-m02 status: &{Name:multinode-225952-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 23:02:12.428671  104606 status.go:255] checking status of multinode-225952-m03 ...
	I1101 23:02:12.428925  104606 cli_runner.go:164] Run: docker container inspect multinode-225952-m03 --format={{.State.Status}}
	I1101 23:02:12.453176  104606 status.go:330] multinode-225952-m03 host status = "Stopped" (err=<nil>)
	I1101 23:02:12.453207  104606 status.go:343] host is not running, skipping remaining checks
	I1101 23:02:12.453216  104606 status.go:257] multinode-225952-m03 status: &{Name:multinode-225952-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)

TestMultiNode/serial/StartAfterStop (30.73s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 node start m03 --alsologtostderr
E1101 23:02:26.252591   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 23:02:32.185007   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-225952 node start m03 --alsologtostderr: (29.971059121s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.73s)

TestMultiNode/serial/RestartKeepsNodes (155.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-225952
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-225952
E1101 23:02:59.225147   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 23:03:00.002967   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-225952: (41.004203474s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225952 --wait=true -v=8 --alsologtostderr
E1101 23:04:42.407051   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 23:05:10.092887   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225952 --wait=true -v=8 --alsologtostderr: (1m54.053165114s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-225952
--- PASS: TestMultiNode/serial/RestartKeepsNodes (155.19s)

TestMultiNode/serial/DeleteNode (4.91s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-225952 node delete m03: (4.247926689s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.91s)

TestMultiNode/serial/StopMultiNode (40.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-225952 stop: (39.825795642s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225952 status: exit status 7 (112.230153ms)

-- stdout --
	multinode-225952
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-225952-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-225952 status --alsologtostderr: exit status 7 (115.718285ms)

-- stdout --
	multinode-225952
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-225952-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 23:06:03.292714  115263 out.go:296] Setting OutFile to fd 1 ...
	I1101 23:06:03.292821  115263 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:06:03.292832  115263 out.go:309] Setting ErrFile to fd 2...
	I1101 23:06:03.292837  115263 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:06:03.292942  115263 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	I1101 23:06:03.293089  115263 out.go:303] Setting JSON to false
	I1101 23:06:03.293118  115263 mustload.go:65] Loading cluster: multinode-225952
	I1101 23:06:03.293155  115263 notify.go:220] Checking for updates...
	I1101 23:06:03.293604  115263 config.go:180] Loaded profile config "multinode-225952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:06:03.293626  115263 status.go:255] checking status of multinode-225952 ...
	I1101 23:06:03.294031  115263 cli_runner.go:164] Run: docker container inspect multinode-225952 --format={{.State.Status}}
	I1101 23:06:03.321070  115263 status.go:330] multinode-225952 host status = "Stopped" (err=<nil>)
	I1101 23:06:03.321092  115263 status.go:343] host is not running, skipping remaining checks
	I1101 23:06:03.321099  115263 status.go:257] multinode-225952 status: &{Name:multinode-225952 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 23:06:03.321120  115263 status.go:255] checking status of multinode-225952-m02 ...
	I1101 23:06:03.321329  115263 cli_runner.go:164] Run: docker container inspect multinode-225952-m02 --format={{.State.Status}}
	I1101 23:06:03.342815  115263 status.go:330] multinode-225952-m02 host status = "Stopped" (err=<nil>)
	I1101 23:06:03.342842  115263 status.go:343] host is not running, skipping remaining checks
	I1101 23:06:03.342849  115263 status.go:257] multinode-225952-m02 status: &{Name:multinode-225952-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.05s)

TestMultiNode/serial/RestartMultiNode (96.32s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225952 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1101 23:07:32.185733   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225952 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m35.66316809s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-225952 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (96.32s)

TestMultiNode/serial/ValidateNameConflict (26.12s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-225952
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225952-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-225952-m02 --driver=docker  --container-runtime=containerd: exit status 14 (87.612681ms)

-- stdout --
	* [multinode-225952-m02] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-225952-m02' is duplicated with machine name 'multinode-225952-m02' in profile 'multinode-225952'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-225952-m03 --driver=docker  --container-runtime=containerd
E1101 23:07:59.224609   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-225952-m03 --driver=docker  --container-runtime=containerd: (23.673941322s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-225952
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-225952: exit status 80 (342.052319ms)

-- stdout --
	* Adding node m03 to cluster multinode-225952
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-225952-m03 already exists in multinode-225952-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-225952-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-225952-m03: (1.946617118s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.12s)

TestScheduledStopUnix (99.44s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-231406 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-231406 --memory=2048 --driver=docker  --container-runtime=containerd: (22.834634933s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-231406 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-231406 -n scheduled-stop-231406
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-231406 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-231406 --cancel-scheduled
E1101 23:14:42.406931   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-231406 -n scheduled-stop-231406
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-231406
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-231406 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-231406
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-231406: exit status 7 (89.281847ms)

-- stdout --
	scheduled-stop-231406
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-231406 -n scheduled-stop-231406
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-231406 -n scheduled-stop-231406: exit status 7 (87.047151ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-231406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-231406
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-231406: (4.919194271s)
--- PASS: TestScheduledStopUnix (99.44s)

TestInsufficientStorage (15.37s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-231545 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-231545 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.829367187s)

-- stdout --
	{"specversion":"1.0","id":"54c420c2-5af8-4d03-803a-e7ea2cfa92f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-231545] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a867f94-9813-456a-9824-664e670cd048","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15232"}}
	{"specversion":"1.0","id":"b3229a6c-5f51-4b85-b399-4296bc9ad030","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bcae4460-5367-48a1-b4f4-85be4143c519","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig"}}
	{"specversion":"1.0","id":"6d0c4b46-ed71-4af6-9b09-2aa835bef711","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube"}}
	{"specversion":"1.0","id":"eef98794-dd90-497e-a2f1-80e9e2303e37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"aa343d8e-5c0e-45d4-bbd7-27409998375b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"df09a497-570b-4705-9f8c-fbdb827c36e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9852760a-abdf-4a0b-a94f-3ebc00db40ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"33a1aab2-8e67-42ad-a49f-447f58517333","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"702bd170-ddcf-44de-b14c-c9b2dc2c7431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-231545 in cluster insufficient-storage-231545","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f05b1f16-9fb8-444a-8c9f-bdee238a1cee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7af519c0-f8a3-4fb9-9ebc-b3ffd3b4e3c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a13e41d7-4795-42d3-8b6f-78208bc8c593","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-231545 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-231545 --output=json --layout=cluster: exit status 7 (331.030649ms)

-- stdout --
	{"Name":"insufficient-storage-231545","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-231545","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1101 23:15:54.983244  138449 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-231545" does not appear in /home/jenkins/minikube-integration/15232-6112/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-231545 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-231545 --output=json --layout=cluster: exit status 7 (322.319274ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-231545","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-231545","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 23:15:55.305727  138558 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-231545" does not appear in /home/jenkins/minikube-integration/15232-6112/kubeconfig
	E1101 23:15:55.313788  138558 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/insufficient-storage-231545/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-231545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-231545
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-231545: (5.882290806s)
--- PASS: TestInsufficientStorage (15.37s)
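The `--layout=cluster` JSON above is machine-readable; a minimal sketch of how the test's expectation could be verified outside the Go harness, assuming only the field names visible in the output above (stdlib only, no minikube required):

```python
import json

# Status JSON as emitted by `minikube status --output=json --layout=cluster`,
# copied from the run above (whitespace added for readability).
status = json.loads("""
{"Name":"insufficient-storage-231545","StatusCode":507,"StatusName":"InsufficientStorage",
 "StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.27.1",
 "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},
 "Nodes":[{"Name":"insufficient-storage-231545","StatusCode":507,
           "StatusName":"InsufficientStorage",
           "Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},
                         "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
""")

# The test expects the cluster and every node to report InsufficientStorage
# (HTTP-style code 507), with apiserver and kubelet stopped (405).
assert status["StatusCode"] == 507
assert all(n["StatusName"] == "InsufficientStorage" for n in status["Nodes"])
print(status["StatusName"])
```

The non-zero exit (status 7) accompanies this payload, so a caller should parse stdout even when the command "fails".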

                                                
                                    
TestRunningBinaryUpgrade (148.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.1169348673.exe start -p running-upgrade-231601 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1101 23:16:05.454016   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.1169348673.exe start -p running-upgrade-231601 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 100 (1m0.614648876s)

                                                
                                                
-- stdout --
	* [running-upgrade-231601] minikube v1.16.0 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - KUBECONFIG=/tmp/legacy_kubeconfig1842241455
	* Using the docker driver based on user configuration
	* minikube 1.27.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.27.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Starting control plane node running-upgrade-231601 in cluster running-upgrade-231601
	* Pulling base image ...
	* Downloading Kubernetes v1.20.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 902.99 MiB / 902.99 MiB  100.00% (progress updates collapsed)
	    > kubectl.sha256: 64 B / 64 B  100.00%
	    > kubelet.sha256: 64 B / 64 B  100.00%
	    > kubeadm.sha256: 64 B / 64 B  100.00%
	    > kubectl: 38.37 MiB / 38.37 MiB  100.00%
	    > kubeadm: 37.40 MiB / 37.40 MiB  100.00%
	    > kubelet: 108.69 MiB / 108.69 MiB  100.00%
	X Exiting due to K8S_INSTALL_FAILED: updating control plane: downloading binaries: downloading kubeadm: download failed: https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/amd64/kubeadm.sha256: rename /home/jenkins/minikube-integration/15232-6112/.minikube/cache/linux/v1.20.0/kubeadm.download /home/jenkins/minikube-integration/15232-6112/.minikube/cache/linux/v1.20.0/kubeadm: no such file or directory
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.1169348673.exe start -p running-upgrade-231601 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.1169348673.exe start -p running-upgrade-231601 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (25.839476747s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-231601 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-231601 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.655150878s)
helpers_test.go:175: Cleaning up "running-upgrade-231601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-231601

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-231601: (2.637961592s)
--- PASS: TestRunningBinaryUpgrade (148.04s)

                                                
                                    
TestMissingContainerUpgrade (138.72s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.162410088.exe start -p missing-upgrade-231719 --memory=2200 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.162410088.exe start -p missing-upgrade-231719 --memory=2200 --driver=docker  --container-runtime=containerd: (1m21.27519324s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-231719

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-231719: (10.501491255s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-231719
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-231719 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-231719 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.525705963s)
helpers_test.go:175: Cleaning up "missing-upgrade-231719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-231719
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-231719: (2.204121698s)
--- PASS: TestMissingContainerUpgrade (138.72s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-231601 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-231601 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (96.068079ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-231601] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (33.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-231601 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-231601 --driver=docker  --container-runtime=containerd: (33.461019335s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-231601 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.84s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (154.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.1966773302.exe start -p stopped-upgrade-231601 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.1966773302.exe start -p stopped-upgrade-231601 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 100 (1m1.060716529s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-231601] minikube v1.16.0 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - KUBECONFIG=/tmp/legacy_kubeconfig3798718416
	* Using the docker driver based on user configuration
	* minikube 1.27.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.27.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Starting control plane node stopped-upgrade-231601 in cluster stopped-upgrade-231601
	* Pulling base image ...
	* Downloading Kubernetes v1.20.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 902.99 MiB / 902.99 MiB  100.00% (progress updates collapsed)
	    > kubelet.sha256: 64 B / 64 B  100.00%
	    > kubeadm.sha256: 64 B / 64 B  100.00%
	    > kubectl.sha256: 64 B / 64 B  100.00%
	    > kubeadm: 37.40 MiB / 37.40 MiB  100.00%
	    > kubectl: 38.37 MiB / 38.37 MiB  100.00%
	    > kubelet: 108.69 MiB / 108.69 MiB  100.00%
	X Exiting due to K8S_INSTALL_FAILED: updating control plane: downloading binaries: downloading kubectl: download failed: https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.0/bin/linux/amd64/kubectl.sha256: rename /home/jenkins/minikube-integration/15232-6112/.minikube/cache/linux/v1.20.0/kubectl.download /home/jenkins/minikube-integration/15232-6112/.minikube/cache/linux/v1.20.0/kubectl: no such file or directory
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.1966773302.exe start -p stopped-upgrade-231601 --memory=2200 --vm-driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.1966773302.exe start -p stopped-upgrade-231601 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (25.753267625s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.1966773302.exe -p stopped-upgrade-231601 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.1966773302.exe -p stopped-upgrade-231601 stop: (1.246947554s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-231601 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1101 23:17:32.185594   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 23:17:59.224420   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-231601 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.806165969s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (154.79s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-231601 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-231601 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.812966523s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-231601 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-231601 status -o json: exit status 2 (449.012745ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-231601","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-231601
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-231601: (7.420076702s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.68s)
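The `status -o json` payload above is what the test inspects after a `--no-kubernetes` start; a minimal sketch of that check, assuming only the JSON shown in the output above (exit status 2 signals a stopped component, so stdout must still be parsed):

```python
import json

# Profile status as printed by `minikube -p NoKubernetes-231601 status -o json`,
# copied from the run above.
status = json.loads('{"Name":"NoKubernetes-231601","Host":"Running",'
                    '"Kubelet":"Stopped","APIServer":"Stopped",'
                    '"Kubeconfig":"Configured","Worker":false}')

# With --no-kubernetes the container keeps running while Kubernetes stays down.
assert status["Host"] == "Running"
assert status["Kubelet"] == status["APIServer"] == "Stopped"
print(status["Host"], status["Kubelet"])
```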

                                                
                                    
TestNoKubernetes/serial/Start (5.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-231601 --no-kubernetes --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-231601 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.752879609s)
--- PASS: TestNoKubernetes/serial/Start (5.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-231601 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-231601 "sudo systemctl is-active --quiet service kubelet": exit status 1 (369.93425ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.78s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-231601

                                                
                                                
=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-231601: (3.299178483s)
--- PASS: TestNoKubernetes/serial/Stop (3.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-231601 --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-231601 --driver=docker  --container-runtime=containerd: (6.787281974s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.79s)

                                                
                                    
TestPause/serial/Start (61.65s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-231710 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-231710 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m1.654495156s)
--- PASS: TestPause/serial/Start (61.65s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-231601 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-231601 "sudo systemctl is-active --quiet service kubelet": exit status 1 (346.64355ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (16.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-231710 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-231710 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.466512594s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.48s)

TestPause/serial/Pause (0.92s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-231710 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.92s)

TestPause/serial/VerifyStatus (0.58s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-231710 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-231710 --output=json --layout=cluster: exit status 2 (577.019972ms)

-- stdout --
	{"Name":"pause-231710","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-231710","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.58s)

TestPause/serial/Unpause (0.97s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-231710 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.97s)

TestPause/serial/PauseAgain (1.43s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-231710 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-231710 --alsologtostderr -v=5: (1.427374601s)
--- PASS: TestPause/serial/PauseAgain (1.43s)

TestPause/serial/DeletePaused (3.81s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-231710 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-231710 --alsologtostderr -v=5: (3.807129308s)
--- PASS: TestPause/serial/DeletePaused (3.81s)

TestPause/serial/VerifyDeletedResources (0.79s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-231710
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-231710: exit status 1 (23.148198ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-231710

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.79s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-231601
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

TestNetworkPlugins/group/false (1.15s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-231841 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-231841 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (556.400132ms)

-- stdout --
	* [false-231841] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15232
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1101 23:18:41.918753  176156 out.go:296] Setting OutFile to fd 1 ...
	I1101 23:18:41.918975  176156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:18:41.918988  176156 out.go:309] Setting ErrFile to fd 2...
	I1101 23:18:41.918998  176156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 23:18:41.919151  176156 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
	I1101 23:18:41.919932  176156 out.go:303] Setting JSON to false
	I1101 23:18:41.922207  176156 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3668,"bootTime":1667341054,"procs":1064,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 23:18:41.922282  176156 start.go:126] virtualization: kvm guest
	I1101 23:18:41.925353  176156 out.go:177] * [false-231841] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1101 23:18:41.927090  176156 out.go:177]   - MINIKUBE_LOCATION=15232
	I1101 23:18:41.927145  176156 notify.go:220] Checking for updates...
	I1101 23:18:41.930078  176156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 23:18:41.931631  176156 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
	I1101 23:18:41.933080  176156 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
	I1101 23:18:41.934446  176156 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 23:18:41.936366  176156 config.go:180] Loaded profile config "force-systemd-env-231837": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1101 23:18:41.936542  176156 config.go:180] Loaded profile config "kubernetes-upgrade-231829": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1101 23:18:41.936631  176156 config.go:180] Loaded profile config "missing-upgrade-231719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I1101 23:18:41.936695  176156 driver.go:365] Setting default libvirt URI to qemu:///system
	I1101 23:18:41.969777  176156 docker.go:137] docker version: linux-20.10.21
	I1101 23:18:41.969874  176156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 23:18:42.098292  176156 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:58 SystemTime:2022-11-01 23:18:41.999160767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1101 23:18:42.098426  176156 docker.go:254] overlay module found
	I1101 23:18:42.174580  176156 out.go:177] * Using the docker driver based on user configuration
	I1101 23:18:42.245515  176156 start.go:282] selected driver: docker
	I1101 23:18:42.245555  176156 start.go:808] validating driver "docker" against <nil>
	I1101 23:18:42.245583  176156 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 23:18:42.308653  176156 out.go:177] 
	W1101 23:18:42.355830  176156 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1101 23:18:42.369457  176156 out.go:177] 

** /stderr **
helpers_test.go:175: Cleaning up "false-231841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-231841
--- PASS: TestNetworkPlugins/group/false (1.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (121.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-231959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-231959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m1.257121174s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (121.26s)

TestStartStop/group/no-preload/serial/FirstStart (49.13s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-232012 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-232012 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (49.128291091s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.13s)

TestStartStop/group/no-preload/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-232012 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [dbdc7c8d-bf19-4e02-a413-de39dfe79b50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [dbdc7c8d-bf19-4e02-a413-de39dfe79b50] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.009932795s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-232012 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.61s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-232012 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-232012 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.61s)

TestStartStop/group/no-preload/serial/Stop (20.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-232012 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-232012 --alsologtostderr -v=3: (20.012969736s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232012 -n no-preload-232012
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232012 -n no-preload-232012: exit status 7 (96.29082ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-232012 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (334.48s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-232012 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-232012 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m34.065248845s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232012 -n no-preload-232012
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (334.48s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-231959 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [48008b25-b8d3-4f25-b795-c937f877ae25] Pending
helpers_test.go:342: "busybox" [48008b25-b8d3-4f25-b795-c937f877ae25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [48008b25-b8d3-4f25-b795-c937f877ae25] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.011727082s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-231959 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-231959 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-231959 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.57s)

TestStartStop/group/old-k8s-version/serial/Stop (20.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-231959 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-231959 --alsologtostderr -v=3: (20.064616153s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-231959 -n old-k8s-version-231959
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-231959 -n old-k8s-version-231959: exit status 7 (94.416671ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-231959 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (431.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-231959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-231959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m11.524451771s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-231959 -n old-k8s-version-231959
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (431.94s)

TestStartStop/group/embed-certs/serial/FirstStart (43.57s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-232234 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E1101 23:22:59.223997   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-232234 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (43.569929419s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.57s)

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-232234 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [8dbbe7b9-ba30-4b46-b048-70c1e827223e] Pending
helpers_test.go:342: "busybox" [8dbbe7b9-ba30-4b46-b048-70c1e827223e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [8dbbe7b9-ba30-4b46-b048-70c1e827223e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.011277374s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-232234 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-232234 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-232234 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/embed-certs/serial/Stop (20.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-232234 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-232234 --alsologtostderr -v=3: (20.031490441s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-232234 -n embed-certs-232234
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-232234 -n embed-certs-232234: exit status 7 (96.207162ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-232234 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (314.86s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-232234 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E1101 23:24:42.406941   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 23:26:02.270777   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-232234 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m14.423036245s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-232234 -n embed-certs-232234
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (314.86s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-89wgn" [34d6a842-efc9-4887-8751-97903a04dd58] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-89wgn" [34d6a842-efc9-4887-8751-97903a04dd58] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.013021844s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-89wgn" [34d6a842-efc9-4887-8751-97903a04dd58] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005645685s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-232012 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-232012 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/Pause (2.98s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-232012 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-232012 -n no-preload-232012
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-232012 -n no-preload-232012: exit status 2 (357.830135ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-232012 -n no-preload-232012
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-232012 -n no-preload-232012: exit status 2 (368.611915ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-232012 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-232012 -n no-preload-232012
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-232012 -n no-preload-232012
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.98s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-232727 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3
E1101 23:27:32.185503   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 23:27:59.224386   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-232727 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (45.190898549s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.19s)

TestStartStop/group/newest-cni/serial/FirstStart (38.25s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-232812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-232812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (38.246696454s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.25s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-232727 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e7ec1f24-654f-4eda-8163-330812a6a602] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [e7ec1f24-654f-4eda-8163-330812a6a602] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.012083548s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-232727 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-232727 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-232727 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (20.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-232727 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-232727 --alsologtostderr -v=3: (20.060596527s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (20.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-232727 -n default-k8s-diff-port-232727
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-232727 -n default-k8s-diff-port-232727: exit status 7 (97.089303ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-232727 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (570.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-232727 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-232727 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (9m30.448110003s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-232727 -n default-k8s-diff-port-232727
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (570.83s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-232812 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-232812 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-232812 --alsologtostderr -v=3: (1.25630775s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-232812 -n newest-cni-232812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-232812 -n newest-cni-232812: exit status 7 (94.550744ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-232812 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (29.98s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-232812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-232812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.3: (29.543678894s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-232812 -n newest-cni-232812
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.98s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-2tzbh" [a07120e2-2d21-430e-84c7-928a87ae0705] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-2tzbh" [a07120e2-2d21-430e-84c7-928a87ae0705] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.012512529s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-2tzbh" [a07120e2-2d21-430e-84c7-928a87ae0705] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006924182s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-232234 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-232234 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/embed-certs/serial/Pause (3.38s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-232234 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-232234 -n embed-certs-232234
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-232234 -n embed-certs-232234: exit status 2 (386.537378ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-232234 -n embed-certs-232234
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-232234 -n embed-certs-232234: exit status 2 (413.622305ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-232234 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-232234 -n embed-certs-232234
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-232234 -n embed-certs-232234
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.38s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-232812 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/newest-cni/serial/Pause (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-232812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-232812 -n newest-cni-232812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-232812 -n newest-cni-232812: exit status 2 (395.303168ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-232812 -n newest-cni-232812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-232812 -n newest-cni-232812: exit status 2 (403.17849ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-232812 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-232812 -n newest-cni-232812
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-232812 -n newest-cni-232812
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)

TestNetworkPlugins/group/auto/Start (43.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-231841 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-231841 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (43.812016917s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.81s)

TestNetworkPlugins/group/kindnet/Start (60.51s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-231841 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-231841 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m0.510133785s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.51s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-r8gxb" [a3a9675e-e324-4630-9a15-29196d7b60c2] Running
E1101 23:29:42.407330   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010613765s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-r8gxb" [a3a9675e-e324-4630-9a15-29196d7b60c2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005129757s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-231959 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-231959 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-231959 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-231959 -n old-k8s-version-231959
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-231959 -n old-k8s-version-231959: exit status 2 (365.216478ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-231959 -n old-k8s-version-231959
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-231959 -n old-k8s-version-231959: exit status 2 (387.350309ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-231959 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-231959 -n old-k8s-version-231959
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-231959 -n old-k8s-version-231959
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

TestNetworkPlugins/group/cilium/Start (110.25s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-231843 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-231843 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m50.245542884s)
--- PASS: TestNetworkPlugins/group/cilium/Start (110.25s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-231841 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-231841 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-6wssr" [d81528db-e37a-441e-ad17-0dff5f0c652a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-6wssr" [d81528db-e37a-441e-ad17-0dff5f0c652a] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005464216s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-231841 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-231841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-231841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-m5z95" [98ce8f9b-405f-4ccf-a1ba-823431b3abb3] Running
E1101 23:30:35.364449   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013303901s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-231841 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-231841 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-vwskm" [b47a58b6-df19-4485-ab37-da9b0fa61d4d] Pending
helpers_test.go:342: "netcat-5788d667bd-vwskm" [b47a58b6-df19-4485-ab37-da9b0fa61d4d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-vwskm" [b47a58b6-df19-4485-ab37-da9b0fa61d4d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005911815s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-231841 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-231841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-231841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (288.75s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-231841 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1101 23:31:01.783138   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:01.788424   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:01.798652   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:01.819144   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:01.859259   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:01.939625   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:02.100028   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:02.421041   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:03.061842   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:04.342904   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:06.903819   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:12.024054   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:22.264307   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:31:42.744458   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-231841 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (4m48.750768092s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (288.75s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-vsjsg" [5b61c983-f48e-45cf-81a9-3e51dc1781fb] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.013951714s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-231843 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.34s)

TestNetworkPlugins/group/cilium/NetCatPod (10.8s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-231843 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-6xks6" [a8a2157f-74f5-432d-bb8e-1dc0e3da5b6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-6xks6" [a8a2157f-74f5-432d-bb8e-1dc0e3da5b6f] Running
E1101 23:32:01.326807   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:01.332108   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:01.342369   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:01.362627   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:01.402930   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:01.483500   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:01.643750   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:01.964828   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:02.605775   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:03.886308   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.005326639s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.80s)

TestNetworkPlugins/group/cilium/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-231843 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.12s)

TestNetworkPlugins/group/cilium/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-231843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.12s)

TestNetworkPlugins/group/cilium/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-231843 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (39.06s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-231841 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E1101 23:32:11.567858   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:21.808786   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:23.704756   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/no-preload-232012/client.crt: no such file or directory
E1101 23:32:32.185860   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 23:32:42.289254   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/old-k8s-version-231959/client.crt: no such file or directory
E1101 23:32:45.454574   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-231841 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (39.060342246s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.06s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-231841 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-231841 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-7q76d" [5cab3c87-4b4d-4868-bfd8-76858d704703] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-7q76d" [5cab3c87-4b4d-4868-bfd8-76858d704703] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005349035s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-231841 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-231841 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-ql9vt" [b3fcc8aa-0148-4eca-b525-62a8cded7d0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-ql9vt" [b3fcc8aa-0148-4eca-b525-62a8cded7d0f] Running
E1101 23:35:42.398321   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.005914288s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-z5h7v" [7de8d9f1-7aad-4a7b-87d1-eb3dc552c91a] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1101 23:38:15.999795   12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/kindnet-231841/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012472591s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-z5h7v" [7de8d9f1-7aad-4a7b-87d1-eb3dc552c91a] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006413714s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-232727 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-232727 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-232727 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-232727 -n default-k8s-diff-port-232727
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-232727 -n default-k8s-diff-port-232727: exit status 2 (367.497694ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-232727 -n default-k8s-diff-port-232727
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-232727 -n default-k8s-diff-port-232727: exit status 2 (367.007612ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-232727 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-232727 -n default-k8s-diff-port-232727
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-232727 -n default-k8s-diff-port-232727
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)

Test skip (23/277)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:451: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:456: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-232727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-232727
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-231841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-231841
--- SKIP: TestNetworkPlugins/group/kubenet (0.25s)

TestNetworkPlugins/group/flannel (0.25s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-231841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-231841
--- SKIP: TestNetworkPlugins/group/flannel (0.25s)

TestNetworkPlugins/group/custom-flannel (0.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-231842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-231842
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.45s)